I’m just beginning a short program with MIT on Applied Data Science, and for my Capstone project I’m hoping to pitch (and at least conceptually demonstrate) something along the lines of feeding scientific literature into various models to establish paper quality, compare abstracts, and use chatbot models to present knowledge to users at their current level of understanding. I’m wondering if anyone knows of ongoing AI projects involving the Nexus system, and how I might pull data from Nexus.

I like the idea of letting AI detect patterns and contradictions, weigh papers based on quality factors (study type, sample size, funding patterns, appropriateness of statistical methods, etc.), and then share best scientific guesses, complete with cautions against over-certainty and information on whose interests are at play. This would also ensure public accountability for all information in the system.

If I could sufficiently demonstrate the possibility in my project, and secure enough funding from people who’d value this sort of thing, it could grow into a free and open-source AI research assistant, usable at most knowledge levels. Would be cool if we called it FountGPT.
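To make the "weigh papers by quality factors" part concrete, here is a minimal sketch of one possible heuristic. Everything in it is a hypothetical illustration I made up for this post: the field names, the evidence-hierarchy scores, and the weights are assumptions, not an established rubric (a real version would need to be calibrated and defended).

```python
from dataclasses import dataclass

# Hypothetical hierarchy of evidence: higher = stronger study design.
STUDY_TYPE_SCORES = {
    "case_report": 0.2,
    "observational": 0.4,
    "cohort": 0.6,
    "rct": 0.9,           # randomized controlled trial
    "meta_analysis": 1.0,
}

@dataclass
class Paper:
    study_type: str
    sample_size: int
    industry_funded: bool
    preregistered: bool

def quality_score(p: Paper) -> float:
    """Combine a few quality factors into a single 0..1 heuristic."""
    score = STUDY_TYPE_SCORES.get(p.study_type, 0.3)
    # Larger samples add confidence, with diminishing returns past 10k.
    score += min(p.sample_size / 10_000, 1.0) * 0.3
    # Flag (not exclude) papers with potential conflicts of interest.
    if p.industry_funded:
        score -= 0.15
    if p.preregistered:
        score += 0.1
    return max(0.0, min(score, 1.0))

print(quality_score(Paper("cohort", 500, industry_funded=True, preregistered=False)))
```

The point isn’t the specific numbers, but that the score is transparent: each penalty and bonus is inspectable, which is what would make the "whose interests are at play" caveats auditable rather than a black box.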