When we shipped the Scispot MCP server, the technical story was straightforward: Model Context Protocol, a Lambda endpoint, OAuth with your API token, and 27 tools mapped to Labsheets, ELN, manifests, storage, Labflow, and images. The human story took longer to tell. Labs did not wake up wanting "MCP." They wanted to stop copying sample IDs into chat windows. They wanted to ask a question in plain English and have the answer come from the system that already holds the truth - not from a PDF export that will be wrong by tomorrow.
This article is for operators, lab leads, and computational scientists who are past the hype curve and into the hard part: what do we actually do with AI if we are serious about traceability? Below are the five use cases we see most often once a team connects Claude or another MCP client to Scispot. They are not theoretical. Each one maps to tools that are live today, as described in Scispot customer documentation.
The thread that ties all five together
Every use case shares one constraint we refused to compromise: no shadow copy of your data. The assistant does not scrape a spreadsheet you emailed yesterday. It calls the same Scispot API as the UI. When it adds a row, updates a protocol, or resolves a barcode to a manifest, that action is logged with your permissions and your audit story. Authentication uses OAuth 2.0 + PKCE with your Scispot API token; unauthenticated requests receive a 401 Unauthorized response. That is the difference between "AI for labs" as a slide deck and AI as something a quality team can reason about.
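Concretely, the connection looks like any other MCP client handshake. A minimal sketch in Python of what goes over the wire, using the JSON-RPC initialize shape from the MCP spec; the endpoint URL is a placeholder, not the real one:

```python
import json

# Placeholder endpoint -- substitute the URL Scispot provides.
MCP_ENDPOINT = "https://example.invalid/mcp"

def build_initialize_request(api_token: str):
    """Headers and JSON-RPC body for the MCP initialize handshake.

    The Bearer token is your Scispot API token; without it the server
    rejects the request with 401 before any tool is reachable.
    """
    headers = {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json",
    }
    body = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # version cited in this article
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }
    return headers, body

headers, body = build_initialize_request("your-scispot-api-token")
print(json.dumps(body, indent=2))
```

In practice an MCP client such as Claude does this for you; the point is that your identity rides on every single request, which is what makes the audit story hold.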
With that frame, the use cases sort themselves naturally - from daily data work, to notebook narrative, to physical traceability, to cross-functional handoffs, to the governance model that makes the first four acceptable in regulated environments.
1. Labsheet intelligence: search, filter, update, and link without a click marathon
Labsheets are where structured lab reality lives: assay buffers, cell banks, reagent lots, instrument run metadata, anything you modeled as rows and columns. The painful pattern we hear is not "we cannot store the data." It is "finding the right rows across folders, applying the right logic, and updating the right cells takes long enough that people take shortcuts."
With MCP, an assistant can list labsheets, pull schemas, and search or filter rows using AND/OR logic - the same expressiveness you would want in a careful UI query, but driven from natural language. You can fetch a row by UUID, add or update rows, and navigate folder hierarchies so the model does not pretend your workspace is flat when it is not.
Where this becomes operational, not gimmicky, is linking. A row in a Labsheet is not an island if it is tied to an experiment, a protocol, or documentation in your ELN. The MCP tools let the assistant connect structured data to narrative context in one flow. That is the kind of work that used to mean alt-tabbing between grids and notebook entries, or batching updates for "later" - which is how traceability gaps appear.
Example prompts: "List labsheets under the Assay Development folder and show the schema for the plate tracking sheet." "Find every row where Lot ID is ABC-4402 and Status is not Released." "Add a row for today's prep and link it to protocol version 3.2 in Labspace X." Each call executes in your workspace; nothing is staged outside Scispot.
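For readers who want to see what the assistant's request actually looks like under the hood, here is a sketch of a tools/call body for a compound filter. The tool name and argument schema are illustrative assumptions, not Scispot's documented contract; the shape of interest is the nested AND/OR structure:

```python
def build_filter_call(labsheet_uuid: str) -> dict:
    """JSON-RPC tools/call body for a compound row filter.

    'filter_labsheet_rows' and the filter schema are hypothetical --
    check Scispot's docs for the real contract. The nested AND of two
    conditions mirrors the "Lot ID is ABC-4402 and Status is not
    Released" prompt above.
    """
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {
            "name": "filter_labsheet_rows",  # hypothetical tool name
            "arguments": {
                "labsheet_uuid": labsheet_uuid,
                "filter": {
                    "and": [
                        {"column": "Lot ID", "op": "eq", "value": "ABC-4402"},
                        {"column": "Status", "op": "neq", "value": "Released"},
                    ]
                },
            },
        },
    }
```

The natural-language prompt and this payload are the same request; the model's job is translating one into the other, and the server's job is refusing anything your token cannot already do.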
2. ELN copilot: draft, append, and connect experiments at the speed of conversation
Electronic lab notebooks die a little every time someone says, "I will paste the methods section when I have time." The ELN is only as good as the link between what happened on the bench and what got written down. Teams want help turning rough notes into structured entries, appending results tables, and tying those entries back to the Labsheet rows that hold the quantitative truth.
The MCP server exposes full ELN-oriented tooling: list Labspaces, list experiments, protocols, and documentation by location, create stubs, append or prepend HTML or text, and link Labsheet rows to any of those entities. That is not read-only assistance. It is participation in the same CRUD and linking model your scientists use in the UI.
This is where the story we heard from a lab manager becomes real: find every experiment that used batch X and add a consistent note. Without MCP, that is either a manual sweep or an export. With MCP, the assistant can search the structured side (Labsheets, manifests, Labflow) and write back into the ELN where the narrative belongs - still under your roles, still in the audit trail.
Example prompts: "List experiments in Labspace 'Upstream Process' created this month." "Append today's deviation summary to experiment EXP-2184." "Link rows from the Culture Viability labsheet to protocol DOC-771." The goal is not to remove human judgment; it is to remove repetitive glue work that keeps the ELN out of date.
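As a sketch of what a write looks like on the wire (tool name and fields are assumptions, not the documented contract), appending a note to an experiment is still just one governed tool call:

```python
def build_append_call(experiment_id: str, html: str) -> dict:
    """tools/call body that appends HTML content to an ELN experiment.

    'append_to_experiment' and its arguments are hypothetical. What
    matters: the write travels the same authenticated channel as every
    read, so it lands in the same audit trail as a UI edit.
    """
    return {
        "jsonrpc": "2.0",
        "id": 3,
        "method": "tools/call",
        "params": {
            "name": "append_to_experiment",  # hypothetical tool name
            "arguments": {"experiment_id": experiment_id, "content": html},
        },
    }
```

A "sweep and annotate" job is then a search call followed by one of these per matching experiment, each individually attributable to the user whose token signed it.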

3. Inventory and traceability: manifests, barcodes, storage, and Labflow samples
Physical lab work does not fail in the database. It fails when someone cannot answer, fast, which box, which rack, which plate, which lineage. The MCP tools include manifest retrieval by HRID, UUID, or barcode, listing root-level storage locations to walk freezers and rooms, listing sample UUIDs for a Labflow, and resolving those UUIDs to full Labsheet rows. Image thumbnail retrieval adds visual context when a record has an attachment worth inspecting.
Use case three is the answer to "I know the barcode; I do not know which menu path gets me there." It is also the answer to audit prep when an investigator asks for a chain from sample list to container to experiment - the assistant can assemble that story from live data instead of from three exported CSVs that disagree.
Example prompts: "Resolve barcode LN2-88421 to a manifest and summarize contents." "List root storage locations and show what is under Freezer F2." "Pull Labflow 'Incoming QC' sample UUIDs and fetch the full row for each from the Sample Registry labsheet." Speed here is not laziness; it is fewer moments where someone guesses because lookup is too slow.
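A rough sketch of the barcode-to-summary chain, with a made-up tool name and a made-up manifest shape (both are assumptions for illustration):

```python
def build_manifest_lookup(barcode: str) -> dict:
    """tools/call body resolving a barcode to a manifest.

    'get_manifest_by_barcode' is a hypothetical tool name.
    """
    return {
        "jsonrpc": "2.0",
        "id": 4,
        "method": "tools/call",
        "params": {
            "name": "get_manifest_by_barcode",  # hypothetical tool name
            "arguments": {"barcode": barcode},
        },
    }

def summarize_manifest(manifest: dict) -> str:
    """Condense a manifest result for a chat answer (result shape assumed)."""
    items = manifest.get("items", [])
    locations = {i.get("location", "unknown") for i in items}
    return f"{len(items)} items across {len(locations)} storage locations"

example = {"items": [{"location": "F2/R1"}, {"location": "F2/R1"},
                     {"location": "F2/R3"}]}
print(summarize_manifest(example))  # 3 items across 2 storage locations
```

The assistant performs both halves: one governed lookup, then a summary that never required the manifest to leave Scispot as a file.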
4. Computational and cross-functional handoffs: structured context without export theater
Computational biologists and data scientists keep telling us the same thing: they do not need another pretty dashboard. They need stable, queryable context - sample metadata, run parameters, protocol versions - that matches what the bench believes is true. When that context requires exports, someone reconciles columns, someone else fixes encoding, and the model trains on a slightly wrong picture of reality.
MCP does not replace Python, R, or your lakehouse. It gives an assistant sitting next to your analyst a governed way to pull the slice of Scispot needed for a report, a methods section, or a sanity check before code runs. Because the tools return structured results from the API, the human can paste into a notebook or pipeline knowing the numbers came from the system of record minutes ago, not from a file that has been sitting in Downloads since Tuesday.
This use case also helps project leads who sit between teams. They can ask for a consolidated view - which experiments touched a given reagent lot, which Labsheet holds the authoritative concentrations - and get answers that are safe to forward because they were never outside Scispot.
Example prompts: "Summarize all experiments linked to Labsheet rows where Reagent Lot is RL-009." "Pull schema and last ten rows from the Sequencing Run Log for handoff to bioinformatics." "List documentation in Labspace 'Analytical' that references Project Thor."
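One concrete pattern for the handoff step: tool results come back as structured rows (lists of JSON objects), so flattening them for a pipeline takes a few lines of standard library rather than a manual export. A sketch, assuming the rows share a schema; the column names are invented for illustration:

```python
import csv
import io

def rows_to_csv(rows: list) -> str:
    """Flatten labsheet rows (a list of dicts from a tool result) into
    CSV for a bioinformatics or analytics handoff."""
    if not rows:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Hypothetical rows, as a filter tool might return them.
rows = [
    {"Run ID": "SR-101", "Lane": 1, "Yield (Gb)": 42.1},
    {"Run ID": "SR-102", "Lane": 2, "Yield (Gb)": 39.8},
]
print(rows_to_csv(rows))
```

The difference from export theater is provenance: the rows were fetched from the system of record seconds before the flatten, not last Tuesday.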
5. Compliance-safe automation: the use case that makes the other four admissible
The fifth use case is meta, but it is the one quality and IT ask about first. Can we allow this without opening a side door? The design answer is yes, because MCP is not a bulk export channel. It is a thin orchestration layer: Claude (or any compatible client) proposes actions; the Lambda validates your Bearer token; Scispot executes. Same permissions as the user. Same logging as UI actions. Protocol version 2025-03-26 on the wire, implemented so that adding tools does not multiply risk surfaces - each tool is still just an API call you could script yourself, only now discoverable through a standard protocol.
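To make the logging model concrete, here is the kind of record such a layer can emit per tool call. This is an illustration of the principle, not Scispot's actual log format; note that it fingerprints the token rather than storing it:

```python
import datetime
import hashlib
import json

def audit_record(user: str, api_token: str, tool: str, arguments: dict) -> dict:
    """One audit entry per tool call: who, which tool, with what, when.

    Field names are assumptions for illustration. The raw token is
    never stored; a short SHA-256 fingerprint identifies it instead.
    """
    return {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "token_fp": hashlib.sha256(api_token.encode()).hexdigest()[:12],
        "tool": tool,
        "arguments": json.dumps(arguments, sort_keys=True),
    }

entry = audit_record("jdoe", "secret-token", "filter_labsheet_rows",
                     {"labsheet_uuid": "abc-123"})
print(entry["tool"], entry["token_fp"])
```

A quality team can reason about records like these the same way they reason about UI audit trails: named user, named action, reproducible arguments.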
That matters when you write AI policies. You can say: assistants may use MCP against Scispot with named tokens and named users, and you can trace what changed. You do not have to choose between blocking AI entirely and tolerating paste-heavy shadow workflows. The fifth use case is governed scale - letting teams automate repetitive query-and-update patterns while keeping the compliance story intact.
Where MCP sits next to the UI, scripts, and integrations
None of these use cases argue that every lab task should move to natural language. The UI is still the right place for nuanced layout work, for training new hires, and for one-off actions where typing a full sentence would be silly. Your existing Python or R jobs should keep calling the Scispot API directly when you want deterministic batch pipelines with version-pinned code. Webhooks and ETL still matter when you are mirroring data to a warehouse for enterprise reporting.
MCP fills the gap between those modes: exploratory work, cross-object questions, and semi-structured updates where writing a full script would take longer than the task deserves, but using the UI would mean dozens of clicks. It is also the right interface for assistants that already live in Claude, Cursor, or future MCP-native clients - you meet scientists where they work without building a custom chatbot per workflow.
Internally, we have watched teams start with read-heavy prompts (list, search, summarize) for a week or two, then graduate to linking and small writes once they trust the permission model. That progression mirrors how labs adopted ELNs in the first place: shadow on the side, then parallel, then system of record. The difference now is that the assistant never holds a stale export; it always reads what Scispot reads.
How to try it
If you are already on Scispot, you can request early access at https://survey.scispot.io/mcp. Once accepted, follow the setup instructions in Scispot customer documentation for Claude Code or Claude Web. Start with read-only prompts until your team is comfortable, then expand into updates and links.
If you are evaluating lab platforms, ask vendors hard questions: Is every action you care about API-addressable? Can an assistant act inside the system rather than on a copy? The five use cases above are how we think those questions should be answered in 2026 - not with a slide about "AI," but with concrete tools, a public endpoint, and an audit trail that still makes sense on inspection day.