Responsible AI & Limitations
Written by Allison Elechko

Consensus is an AI search engine for scientific and academic research papers. Our goal is to help make research more efficient, accessible, and effective using AI.

We are as excited as anybody about the future of the AI-powered information space.

We also know how powerful these models are and that they must be used carefully, thoughtfully, and responsibly, especially when the stakes are high.

Because of this, we have many guardrails in place to help ensure responsible AI usage within Consensus.

Guardrails

  • No black boxes, no fake sources: everything in Consensus starts with a search of scientific literature

    • At our core, we are a search engine, not a chatbot, meaning every source we cite will be a real paper and only one click away

  • Purpose-built AI models: the models that summarize and synthesize information from the research papers we find are explicitly instructed and trained to generate responses using only the text from Consensus results.

  • Relevancy and confidence thresholding: dedicated "checker" models ensure that only papers we deem relevant enough are used in our AI summaries (see the sketch after this list for a rough illustration)

    • For example, if you type in a question and get back completely irrelevant results, we will not generate an AI summary.

    • We have found that one of the biggest failure cases for LLMs generating summaries from a source occurs when the source does not contain a relevant answer; in those cases, LLMs will often "fill in the gaps"

  • Tight feedback loops: we have an in-product support portal and always encourage users to report any mistakes, allowing us to address and resolve issues promptly.

    • We see every support ticket and try to reply within the day, and we also have a dedicated Slack channel to solicit further feedback.
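
To make the thresholding idea above concrete, here is a minimal sketch of how a relevance cutoff can gate an AI summary. The class names, scores, and threshold value below are illustrative assumptions, not the actual Consensus models or pipeline:

    # Illustrative sketch only: hypothetical names and threshold values.
    from dataclasses import dataclass

    @dataclass
    class Paper:
        title: str
        abstract: str
        relevance_score: float  # score from a hypothetical "checker" model, 0.0 to 1.0

    RELEVANCE_THRESHOLD = 0.7  # assumed cutoff for inclusion in a summary

    def papers_for_summary(results: list[Paper]) -> list[Paper]:
        """Keep only results deemed relevant enough to feed the AI summary."""
        return [p for p in results if p.relevance_score >= RELEVANCE_THRESHOLD]

    def maybe_summarize(results: list[Paper]) -> str | None:
        """Skip summarization entirely when no result clears the threshold."""
        relevant = papers_for_summary(results)
        if not relevant:
            return None  # no AI summary is shown for an irrelevant result set
        # A summarization model would be called here, restricted to the text
        # of the relevant papers only.
        return "Summary based on: " + "; ".join(p.title for p in relevant)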


Limitations

While we are proud of the Consensus product today, there are still plenty of limitations, and our features will continue to be a work in progress.

Here are some of the many limitations to call out for our product today:

  • Research quality is not a part of our AI analysis: a limitation that we cannot wait to address! Currently, each paper counts the same toward the summary regardless of whether it comes from a meta-analysis or an n = 1 case report. Not all research is created equal, and future versions of our features will take into account both paper and journal quality. Sometimes the most relevant answers come from junk research.

  • Our analysis is only as good as our search logic: our pro analysis feature synthesizes the top 10 results that we surface. There will be times when the summary does not answer your question particularly well even though it did what it was "supposed to do"; our search models simply did not surface papers that address your specific research query.

  • We do not have access to ALL research: the Consensus database includes over 200 million peer-reviewed papers. While this represents significant coverage, there is plenty of amazing research that we do not have access to. The summary is just a snapshot of some of the relevant research that we do have access to, not a fully comprehensive look at all of the research regarding your question.

  • Hallucinations can occur: our testing shows that, with the right instructions, GPT-4 is significantly less likely than previous model versions to generate content that does not represent the underlying source material. However, any time a generative model is used, there is the possibility of it creating an answer that is not based on reality. Always read the papers that power the summary before coming to a final answer to your question!


Final Thoughts

Our mission is to make the best science accessible to everyone through AI, but responsible use is essential. While we continuously improve Consensus with more guardrails, we recognize the technology's limitations and are committed to giving you transparency every step of the way.

We encourage you to stay engaged with our platform by exploring the source materials, sharing any feedback, and reporting inaccuracies through our support portal. Your feedback helps us improve and make AI-powered research tools more accurate for everyone.

Thank you for being a part of our journey!


If you have questions or need assistance, reach out to our support team at [email protected] or via the Support Chat in the interface. We’re here to help!
