A single search interface that queries 12+ academic databases at once, including Primo, EBSCO, arXiv, DOAJ, Semantic Scholar, CrossRef, CORE, IEEE, and Elsevier. Results come back with APA 7 citations and relevance scores.
Researchers at the institution were spending hours every week on searches that still missed important sources.
Every literature review started the same way: open Primo, run a search, copy results. Switch to EBSCO, re-enter the query, sift through overlapping results. Move to arXiv, then DOAJ, then Semantic Scholar. Each database had its own interface and quirks, its own way of returning results. A thorough search could eat up an entire day before a researcher even started reading.
The citation problem compounded everything. Each database exported references in a slightly different format. Researchers cobbled together bibliographies by hand, toggling between APA guides and database exports, fixing capitalization, italicization, and DOI formatting one entry at a time. A single misplaced comma could mean points deducted or a manuscript returned for revision.
The library team saw that this fragmented workflow wasn't just slow. It created blind spots. Researchers who stuck with two or three familiar databases consistently missed relevant work published elsewhere. The institution needed something that collapsed these silos into one search.
The solution needed parallel API calls, solid deduplication, and a citation engine that got APA 7 right down to every italic and comma.
Built asynchronous connectors for 12+ academic APIs (REST, OAI-PMH, and proprietary protocols) with rate limiting, retry logic, and response normalization into a common schema.
Developed a ranking algorithm that weighs title match, abstract relevance, citation count, recency, and source authority to put the best results first.
Implemented a rule-based citation formatter handling journals, books, conferences, preprints, and web sources with proper italicization, DOI linking, and author truncation rules.
Parsed abstracts to pull out findings and methods so researchers can scan a summary before committing to the full paper.
See how a single search engine can speed up your research workflow.
Let's Talk

Six features that turn a fragmented, multi-hour workflow into a single search.
Enter a query once and get results from Primo, EBSCO, arXiv, DOAJ, Semantic Scholar, CrossRef, CORE, IEEE, Elsevier, and more, all on one results page.
Every result comes with a correctly formatted APA 7 citation, generated on the spot. Handles journals, books, conference papers, preprints, and web sources.
Abstracts are parsed to pull out findings and methods. Researchers can scan dozens of papers in minutes instead of hours.
The algorithm weighs title match, abstract relevance, citation count, recency, and source authority to rank results by actual usefulness.
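A weighted-sum scorer over those five signals might look like the sketch below. The weights, the 20-year recency window, and the log damping on citation counts are illustrative placeholders, not the production algorithm's actual coefficients.

```python
import math
from datetime import date

# Placeholder weights over the five signals named in the text.
WEIGHTS = {"title": 0.30, "abstract": 0.25, "citations": 0.20,
           "recency": 0.15, "authority": 0.10}

def term_overlap(query: str, text: str) -> float:
    """Fraction of query terms that appear in the text (crude lexical match)."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def relevance_score(query, title, abstract, citation_count, year, authority):
    signals = {
        "title": term_overlap(query, title),
        "abstract": term_overlap(query, abstract),
        # Log-damped so a 10k-citation classic doesn't drown out recent work.
        "citations": min(1.0, math.log1p(citation_count) / math.log1p(10_000)),
        # Linear decay over an assumed 20-year window.
        "recency": max(0.0, 1 - (date.today().year - year) / 20),
        "authority": authority,  # e.g. a 0..1 per-source trust score
    }
    return sum(WEIGHTS[k] * v for k, v in signals.items())
```

Because every signal is clamped to [0, 1] and the weights sum to 1, scores are directly comparable across databases.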
Narrow results by database, document type, date range, or open-access status. Mix and match filters to find exactly what you need.
Save searches for later, build citation lists across sessions, and export complete bibliographies ready to paste into any document or reference manager.
Three views that illustrate the search-to-citation pipeline.
One search bar fires queries across 12+ databases in parallel. Results stream in as each API responds, with a progress indicator showing which sources have reported back.
Results are deduplicated, ranked by relevance score, and annotated with takeaways from each abstract. Scanning and evaluating papers gets much faster.
Select any combination of results and export a correctly formatted APA 7 bibliography, ready to drop into a paper, thesis, or reference manager.
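The deduplication step behind that results view can be sketched as follows: prefer the DOI as the merge key, fall back to a normalized title fingerprint, and merge source lists so the UI can show that a paper was found in multiple databases. The dict shape and key strategy here are assumptions for illustration.

```python
import re

def dedup_key(record: dict) -> str:
    """Prefer the DOI; fall back to a normalized title fingerprint."""
    doi = (record.get("doi") or "").lower().strip()
    if doi:
        return "doi:" + doi
    title = re.sub(r"[^a-z0-9 ]", "", record.get("title", "").lower())
    return "title:" + " ".join(title.split())

def deduplicate(records: list[dict]) -> list[dict]:
    seen: dict[str, dict] = {}
    for rec in records:
        key = dedup_key(rec)
        if key in seen:
            # Merge source lists so "found in Semantic Scholar AND CrossRef"
            # can be displayed alongside the result.
            seen[key]["sources"] += rec.get("sources", [])
        else:
            seen[key] = {**rec, "sources": list(rec.get("sources", []))}
    return list(seen.values())

merged = deduplicate([
    {"doi": "10.1000/X1", "title": "A Paper", "sources": ["CrossRef"]},
    {"doi": "10.1000/x1", "title": "A Paper!", "sources": ["Semantic Scholar"]},
])
```

The title fallback matters because preprint servers often lack DOIs, and the same work can surface from arXiv and a publisher database under slightly different punctuation.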
The numbers tell the story.
What we learned building this, and what we'd carry into the next project.
Every database returns data differently: field names, date formats, author structures all vary. Building a solid normalization layer early prevented cascading bugs downstream and made adding new sources easy.
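One way to keep that normalization layer cheap to extend is a declarative per-source field map, so adding a new database is one dictionary entry rather than new parsing code. The source names and field names below illustrate the kind of variance the lesson describes; they are not the actual API payloads.

```python
# Hypothetical per-source field maps: canonical name -> source field name.
FIELD_MAPS = {
    "crossref": {"title": "title", "authors": "author", "year": "published"},
    "semantic_scholar": {"title": "title", "authors": "authors", "year": "year"},
}

def normalize(source: str, raw: dict) -> dict:
    """Project a source-specific payload onto the canonical schema."""
    fmap = FIELD_MAPS[source]
    return {canonical: raw.get(source_field)
            for canonical, source_field in fmap.items()} | {"source": source}

canonical = normalize("crossref", {"title": "X", "author": ["A"], "published": 2020})
```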
APA 7 has dozens of edge cases. The et al. rules change at different author counts, DOIs need specific formatting, and whether a title gets italicized depends on the source type. Treating citations as a dedicated engine rather than string concatenation was the right call.
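Two of those author-count rules, as published in the APA 7 style guide, make the point concretely: reference lists name up to 20 authors (21 or more become the first 19, an ellipsis, and the final author), while in-text citations collapse three or more authors to "et al." The sketch below encodes just those two rules over plain surnames; a full engine would also handle initials, suffixes, and group authors.

```python
def apa_reference_authors(authors: list[str]) -> str:
    """APA 7 reference-list rule: list up to 20 authors; for 21 or more,
    give the first 19, an ellipsis, then the final author (no ampersand)."""
    if len(authors) == 1:
        return authors[0]
    if len(authors) <= 20:
        return ", ".join(authors[:-1]) + ", & " + authors[-1]
    return ", ".join(authors[:19]) + ", ... " + authors[-1]

def apa_intext_authors(authors: list[str]) -> str:
    """APA 7 in-text rule: one or two authors are named; three or more
    collapse to the first author plus 'et al.'"""
    if len(authors) == 1:
        return authors[0]
    if len(authors) == 2:
        return f"{authors[0]} & {authors[1]}"
    return f"{authors[0]} et al."
```

Rules like these are exactly why string concatenation breaks down: the behavior switches at 2, 3, and 21 authors, and each branch has its own punctuation.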
Displaying the source database alongside every result built researcher confidence. When users can see that a finding appears in both Semantic Scholar and CrossRef, they trust the relevance ranking and adopt the tool faster.
If you need a custom search integration, citation engine, or academic workflow tool, we'd like to hear about it.