
AI Implementation Roadmap

Phased approach to automating procurement contract review

The strategy is designed to deliver value immediately with a manual workflow, then progressively automate as the Omnia integration matures. Each phase builds on the previous one and can be paused or adjusted based on what we learn. The guiding principle: don't build what we'll have to maintain if the platform can do it natively.

Phase 1: Manual Document Review (Start Here)

Morgan uploads documents manually; AI reviews them against playbook rules

What it does:
  • A simple interface where Morgan or Rory can upload an SOW and/or MSA
  • AI reviews the documents and returns a structured report covering:
    • Document type classification: is this actually an SOW, MSA, NDA, or online terms?
    • Completeness check: are required fields present (quantities, pricing, term length, payment terms, SKUs)?
    • Playbook compliance: flag auto-renewal clauses, payment terms worse than net 60, missing price caps, and unlimited-increase language
    • Risk items: any unusual clauses or language that warrants closer human review
  • Output is a structured checklist Morgan can act on immediately
Why start here:
  • Zero integration dependencies: works today with no API access or Omnia changes
  • Morgan already tested contract review in Glean and was "very impressed" by the detail
  • Validates accuracy before we connect it to anything automated
  • Immediate time savings on the ~40% of submissions that are incomplete or the wrong doc type
What we build:
  • Encode the negotiation playbook rules (payment terms, auto-renewal, price caps, etc.) as structured review criteria
  • Build a prompt pipeline that processes uploaded documents against the criteria
  • Return results as a clear pass/fail checklist with specific findings
Success criteria:
  • Correctly classifies document type (SOW vs MSA vs NDA vs online terms)
  • Catches the major playbook violations Morgan would catch manually
  • Morgan and Rory trust the output enough to use it as a first pass
Build vs. Buy Decision: This could be a standalone tool or a Glean agent, depending on whether the output needs to go back into Omnia/Jira. Since the initial use is just Morgan reviewing the report, a lightweight build is fine. Glean is not ideal here because the value is in structured output back to the requester, not company-wide search.
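The playbook-as-criteria idea can be sketched as data plus a checker. This is a minimal illustration: the rule names, field names, and thresholds below are placeholders drawn from the examples in this document, not Morgan's confirmed rule set.

```python
# Sketch: negotiation playbook encoded as structured review criteria.
# Rule IDs, fields, and thresholds are illustrative placeholders - the
# real rule set comes from Morgan (see Immediate Next Steps).

REQUIRED_FIELDS = ["quantities", "pricing", "term_length", "payment_terms", "skus"]

PLAYBOOK_RULES = [
    {"id": "payment_terms",
     "check": lambda doc: doc.get("payment_net_days", 0) <= 60,
     "finding": "Payment terms worse than net 60"},
    {"id": "auto_renewal",
     "check": lambda doc: not doc.get("auto_renewal", False),
     "finding": "Auto-renewal clause present"},
    {"id": "price_cap",
     "check": lambda doc: doc.get("price_increase_cap_pct") is not None,
     "finding": "No cap on price increases"},
]

def review(doc: dict) -> dict:
    """Return a pass/fail checklist with specific findings."""
    missing = [f for f in REQUIRED_FIELDS if f not in doc]
    violations = [r["finding"] for r in PLAYBOOK_RULES if not r["check"](doc)]
    return {
        "complete": not missing,
        "missing_fields": missing,
        "playbook_violations": violations,
        "passed": not missing and not violations,
    }
```

Keeping the rules as data rather than prose makes it cheap to extend the set as Morgan confirms additional playbook rules, and the same structure can later feed the Phase 2 auto-reject logic.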
Phase 2: Omnia API Integration (Next)

Auto-trigger document review when a procurement request hits the queue

Prerequisites:
  • Phase 1 validated: AI review is accurate enough to trust
  • API access to Omnia confirmed (meet with Morgan's Omnia rep)
  • Understand Omnia's native capabilities: they may already support some of this
How it works:
  • Event trigger: when a new procurement request is submitted in Omnia with attached documents, automatically pick them up for review
  • Same AI review pipeline from Phase 1, now triggered automatically instead of manually
  • Results written back to the Omnia ticket as a structured review summary
  • Auto-reject for clear failures: if the system is highly confident the submission is wrong (e.g., NDA uploaded instead of MSA), send it back to the requester immediately with guidance, before it ever reaches procurement
Impact:
  • Eliminates the 1-2 week delay where bad documents sit in the queue waiting for human review
  • Requester gets feedback in minutes, not weeks
  • Procurement team only sees requests that have already passed basic validation
  • Review happens in parallel with FP&A/manager approval, so there is no sequential bottleneck
Key Questions for Omnia Rep: Does Omnia have webhook/API support for new request events? Can we write results back to the ticket via API? Does Omnia have any native document validation or AI capabilities we should use instead of building?
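The Phase 2 flow can be sketched as a webhook handler around the Phase 1 pipeline. Omnia's actual event payload, webhook support, and write-back API are exactly the open questions above, so every field name here is an assumption; the review function is passed in so the same Phase 1 pipeline can be reused unchanged.

```python
# Sketch: Phase 2 event flow. Omnia's real webhook payload and
# write-back API are unknown until the rep meeting - "attachments",
# "ticket_id", and the action names are all assumptions.

def handle_new_request(event: dict, review_fn) -> dict:
    """Run the Phase 1 review pipeline on a new-request event.

    review_fn is the Phase 1 pipeline: attachments -> report dict.
    """
    attachments = event.get("attachments", [])
    if not attachments:
        # Nothing to review: bounce straight back to the requester
        return {"action": "auto_reject", "reason": "No documents attached"}
    report = review_fn(attachments)
    if report.get("confident_failure"):
        # High-confidence failure (e.g. NDA uploaded instead of MSA):
        # return to the requester with guidance before procurement sees it
        return {"action": "auto_reject", "reason": report["reason"]}
    # Otherwise write the structured summary back to the Omnia ticket
    return {"action": "write_back",
            "ticket": event.get("ticket_id"),
            "summary": report}
```

The key design point is that only high-confidence failures are auto-rejected; anything ambiguous still lands on the ticket for Morgan and Rory with the review summary attached.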
Phase 3: Contract Comparison and Redlining (Future)

AI compares new contracts against previous versions and generates redline suggestions

Prerequisites:
  • Phase 2 running: automated document intake is working
  • LinkSquares integration or API access for pulling historical contracts
  • Sufficient historical contract data accessible programmatically
Capabilities:
  • Contract diff engine: for renewals, automatically pull the previous SOW/MSA from LinkSquares and compare it against the new submission
  • Change detection: highlight every difference between old and new - price changes, new clauses, removed protections, term modifications
  • Risk scoring: flag changes that violate playbook rules (price increase >5%, new auto-renewal clause, removed caps)
  • Redline generation: for identified issues, suggest specific contract language changes that procurement can send directly to the requester
Why it matters:
  • Suppliers - especially large ones - frequently sneak in unfavorable terms assuming Contentful won't read carefully
  • Manual comparison is time-consuming and error-prone
  • Morgan's goal: give the requester something they can forward directly to the supplier
  • AI is well-suited to exhaustive document comparison at speed
Integration Path: LinkSquares already integrates with Omnia. If we can query LinkSquares for the most recent agreement with a given supplier, the diff can be fully automated for renewals. New vendor contracts would still go through Phase 2 review only.
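The diff-and-score steps above can be sketched with the standard library. This assumes clause extraction happens upstream (contracts reduced to lists of clause strings), which is a simplification; the 5% threshold is the playbook example from this document.

```python
import difflib

# Sketch: renewal comparison against the previous agreement pulled from
# LinkSquares. Assumes contracts are already split into clause strings
# upstream; real clause extraction from PDFs is a separate problem.

def diff_contracts(old_clauses: list, new_clauses: list):
    """Return (added, removed) clauses between old and new versions."""
    sm = difflib.SequenceMatcher(a=old_clauses, b=new_clauses)
    added, removed = [], []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag in ("replace", "delete"):
            removed.extend(old_clauses[i1:i2])   # protections dropped
        if tag in ("replace", "insert"):
            added.extend(new_clauses[j1:j2])     # new terms slipped in
    return added, removed

def score_price_change(old_price: float, new_price: float) -> str:
    """Flag increases above the playbook's 5% example cap."""
    pct = (new_price - old_price) / old_price * 100
    return "violation" if pct > 5 else "ok"
```

Both added and removed clauses matter: a removed price cap is as risky as a new auto-renewal clause, which is why the diff reports them separately rather than as a single change list.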
Phase 4: Full-Cycle Automation (Horizon)

End-to-end automation from submission to requester-ready feedback

Capabilities:
  • Requester guidance at submission: an AI assistant helps the requester understand which documents they need (MSA vs SOW vs NDA), checks uploads in real time, and prevents bad submissions entirely
  • Parallel pre-review: the moment documents are uploaded - even before FP&A or manager approval - the AI runs completeness, playbook, and comparison checks
  • Auto-generated requester responses: for standard issues (wrong doc type, missing fields, playbook violations), generate the exact feedback message or redlined document the requester needs to send back to the supplier
  • Procurement dashboard: when the request reaches Morgan and Rory, the AI review is already complete; they validate the AI findings, handle the nuanced cases, and approve
  • Learning loop: track which AI findings procurement agrees or disagrees with to refine accuracy over time
What stays human:
  • Complex implementation fee structures that vary by supplier type
  • Large vendor negotiations where leverage and relationship context matter
  • Novel clause types the playbook doesn't yet cover
  • Judgment calls on when to push back vs. accept slightly non-standard terms
  • These are the cases Morgan and Rory should be spending their time on - not catching missing line items
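The learning loop can start as something very simple: log whether procurement agreed with each AI finding and compute a per-rule agreement rate, which then tells us which checks are reliable enough to drive auto-reject. A minimal sketch (the rule IDs and threshold are illustrative):

```python
from collections import Counter

# Sketch: Phase 4 learning loop as a per-rule agreement tracker.
# Procurement's agree/disagree verdicts on AI findings accumulate here;
# rules with a high agreement rate become candidates for auto-reject.

class FindingTracker:
    def __init__(self):
        self.counts = Counter()

    def record(self, rule_id: str, agreed: bool) -> None:
        """Log procurement's verdict on one AI finding."""
        self.counts[(rule_id, agreed)] += 1

    def agreement_rate(self, rule_id: str) -> float:
        """Fraction of findings for this rule that procurement confirmed."""
        agree = self.counts[(rule_id, True)]
        total = agree + self.counts[(rule_id, False)]
        return agree / total if total else 0.0
```

Even this crude counter answers the operational question behind Phase 2's auto-reject: which playbook rules has the AI earned the right to enforce without a human in the loop.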
| Phase | Trigger | AI Capability | Integration Needed |
|-------|---------|---------------|--------------------|
| 1 | Manual upload | Doc classification, completeness check, playbook compliance | None |
| 2 | Omnia event | Same as Phase 1, auto-triggered + auto-reject | Omnia API |
| 3 | Omnia event (renewals) | Contract diff, change detection, redline generation | Omnia API + LinkSquares |
| 4 | Submission-time | Guided intake, parallel pre-review, auto-response drafting | Omnia (deep), LinkSquares, Jira history |

Immediate Next Steps

  1. Schedule meeting with Morgan's Omnia rep to understand API capabilities, webhook support, and any native AI/validation features Omnia already offers
  2. Get a few sample contracts from Morgan (one clean SOW, one incomplete SOW, one with playbook violations) to build and test the Phase 1 review pipeline
  3. Encode the negotiation playbook as structured review criteria - Morgan to confirm the full rule set beyond what was discussed in discovery
  4. Build Phase 1 prototype and have Morgan/Rory test it against contracts they've already reviewed to validate accuracy