From Detection to Pattern Recognition: Designing AI That Helps Experts Investigate

AI-Enhanced Research Screening & Investigation

In 2023, Morressier launched Integrity Manager to address integrity issues in scholarly publishing. To sustain the product's momentum and unlock sales opportunities, we took a new approach aimed at elevating the product's market appeal.


Information

Morressier is ...

A platform that focuses on providing publishing tools for Scholarly Publishers and Societies, with submission management workflows and AI-powered research integrity solutions.

Integrity Manager is ...

A specialized AI system focused on protecting research owned by Publishers and Research Institutes. It provides users with preflight and integrity checks that suggest integrity measures to take, all in a single dashboard for ease of use.

Research integrity is...

The practice of conducting research in ways that promote trust and confidence in all aspects of science. This includes overseeing the whole research lifecycle, from hypothesis to experiment, through to publication and the formation of the scientific record.

THE CHALLENGE

In late 2023, Morressier faced setbacks that blocked the company from boosting Integrity Manager sales. Our customers valued the product’s integrity checks, but they needed tools to act on detected compromises, not just identify them. Our current offering also added extra steps, reducing efficiency, so our team led a Design Sprint, among other initiatives, to bring actionability and efficiency to the forefront.

TIMELINE

9 months of continuous improvement and UX upgrades, September 2023 to February 2024

TEAM

B2B team with 2 Fullstack Engineers, 1 QA Engineer, 1 Product Manager, 1 Engineering Manager, and a User Researcher (with company-wide obligations). I took on the role of Senior Product Designer and UX Researcher.

Impact
Deals

6 closed deals with medium and large publishers

Prospects

3 New Enterprise Prospects in Pipeline

Engagement

Increased user engagement with integrity reports through actionable interfaces

Finalist

Award for Innovation in Publishing by the Association of Learned and Professional Society Publishers (ALPSP), 2024

Why Discoverability is not Enough

Imagine you're a Research Integrity Officer at a major publishing house. You've just discovered potential data manipulation in a breakthrough cancer research paper that's about to be published. Every hour you spend figuring out next steps isn't just an inconvenience - it's potentially allowing compromised research to influence critical medical decisions.

For publishers, the stakes are incredibly high. A single missed or delayed action on compromised research can:

  • Damage a journal's reputation built over decades
  • Waste valuable research funding
  • Misdirect future research efforts
  • Erode public trust in scientific publishing

This isn't theoretical. In 2020, The Lancet and The New England Journal of Medicine (NEJM) retracted a high-profile COVID-19 study that had already influenced global health policies. The paper, which raised concerns about the safety of certain COVID-19 treatments, led the World Health Organization to temporarily halt several clinical trials. By the time questions about data integrity triggered the retraction, the paper had already affected medical decisions worldwide and shaken public trust in COVID-19 research. Fast, effective integrity checking could have prevented this cascade of consequences.

The Problem

While Integrity Manager successfully detected research misconduct, publishers struggled to investigate findings effectively. The AI would flag "23 failed checks," but integrity officers needed to see patterns across those checks to apply their expert judgment.

Initial feedback from sales calls in mid-2023 revealed a critical gap. As one customer put it: "We can see the problems, but what do we do next?"

The challenge became apparent in early adoption metrics. Despite strong market interest, we faced difficulties closing sales. Publishers needed more than just detection. They needed tools that surfaced patterns for their investigation, not exhaustive lists. They needed AI that helped them recognize what to look for, then enabled them to act on their own conclusions.

Current State Analysis

Our analysis revealed three critical gaps in the platform:

PATTERN BLINDNESS

Users struggled to identify meaningful patterns across 23+ individual check failures. The interface presented data as isolated items rather than grouped insights. Integrity officers spent more time parsing the list than investigating the actual issues.

Key insight: Professionals are trained pattern recognizers, but our AI was presenting noise instead of signal.

EXPERT JUDGMENT UNDERMINED

The platform delivered verdicts ("FAILED 23 checks") rather than surfacing patterns for professional assessment. Publishers needed AI that helped them see what they might have missed, not AI that told them what to conclude.

Key insight: In high-stakes environments, professionals reject AI that claims certainty.

Workflow Inefficiency

The manual process of reviewing findings, grouping related issues, creating investigation pathways, and deciding next steps created significant burden. This became especially problematic when dealing with multiple issues, forcing users to create their own systems for managing investigations.

Key insight: Detection without investigation support doesn't solve the publisher's actual problem.

The Journey to Solutions

The path from detection to action wasn't straightforward. While our integrity checks successfully identified potential misconduct, publishers were still spending days investigating issues, switching between multiple tools, and manually tracking their progress. Our journey to solve this began with a collaborative design sprint that brought fresh perspectives to the problem, followed by intensive design work to transform initial concepts into scalable solutions.

Through this process, we discovered that effective actionability isn't just about adding buttons or features - it's about understanding the complex decision-making process publishers go through when investigating integrity issues, and creating tools that support each step of that journey.

Phase 1 • 7 days in Oct. 2023

Design Sprint

Rather than rushing to solutions, we adopted a collaborative design sprint that brought fresh perspectives to the problem. With backing from our CTO and Product VP, we assembled a focused team to explore how we could transform Integrity Manager from a detection tool to an investigation platform.

Sprint Team

I led the sprint alongside Leandro, another designer on our platform team. As Senior Product Designers deeply familiar with Morressier's product ecosystem, our combined experience across its products gave us comprehensive insight into how integrity checks could be better integrated and made more actionable. Thomas Fortmann, our Staff User Researcher, provided crucial support in conducting both internal and external usability tests; his expertise ensured we were gathering meaningful feedback throughout the sprint. Mădălina Pop, the Integrity Team Product Manager, took on the critical role of Decider, keeping us aligned with our sprint goals and making key decisions when needed.

Defining the Sprint Goal

Getting started with the sprint, we sorted through questions and ideas that needed organizing. The first step was writing everything down as "How Might We" (HMW) questions. We then voted on which ones felt most important to tackle:

  • HMW help users recognize patterns across multiple integrity checks?
  • HMW surface meaningful groupings instead of exhaustive lists?
  • HMW present findings as indicators for expert judgment, not verdicts?
  • HMW enable professionals to investigate patterns and act on their conclusions?

From these, we set our sprint goal: increase workflow efficiency across all integrity investigation touchpoints.


Sprint Structure

After nailing down our goal, we got straight into the work. Here's how we broke it down:

  • Discovery
  • Ideating & Sketching
  • Testing Prototype

Discovery

We kicked off by digging into all the client feedback we had. This was crucial - we needed everyone on the same page and properly informed before we started throwing around solutions. Having this solid foundation made it way easier to focus on the real issues and map out proper processes.

Ideating & Sketching

Before jumping into sketching, we took a look at our design backlog to see if there was anything we could learn from previous work. Then, with fresh inspiration, we dove into sketching sessions. When it came time to vote on which sketches to develop further, we focused on ideas that could give users both helpful context and clear actions to take. We needed to separate the must-haves from the nice-to-haves - keeping our sprint goal front and center helped with this. We brought all these pieces together in a solution assembly session with the whole sprint team, piecing together a complete solution based on the sketches we'd picked.

Testing Prototypes

By day 4, we were ready to start prototyping. Leandro and I took turns building and testing the prototype, while also getting the usability testing scripts ready.
We wanted to stay flexible and prepared for the next day's tests, making sure we could keep gathering useful data as we went. To get balanced feedback, we spent two days testing the prototype with six different people:

  • 4 internal team members who really knew their stuff about research integrity
  • 2 customer representatives: a Head of Research Integrity and Peer Review and a Research Integrity Manager from a large publishing house (one that brings in over $8m annually, with publications making up more than 50% of their revenue)

Solutions

A key part of our sprint focused on defining what true actionability would look like in the platform. We weren't just thinking about random actions - we needed a cohesive system that would support publishers throughout their investigation process.
Here's what we came up with:

Smart Insights

Instead of just showing individual check results, we designed a way to combine related checks to tell a bigger story. For example, by looking at over-citation, duplicated references, and self-citations together, we could indicate potential citation manipulation. This helps users quickly understand the bigger picture of what's happening with a paper.

Check Acknowledgment

We created a way for users to mark flagged checks as "acknowledged" after they've looked into them and decided they're not actual problems. This helps teams keep track of what's been reviewed versus what still needs attention.

Activity Tracking

We developed an activity log that shows all actions taken within a report - who acknowledged checks, who shared the report, and any other interactions. This creates a clear audit trail of the investigation process.

Report Sharing

We added the ability to share reports with team members, making it easier for publishers to collaborate on investigations rather than working in isolation.

Investigation Conclusion

We created a clear endpoint for investigations where users can mark a paper as either acceptable or unacceptable after reviewing all checks and insights. This decision directly impacts whether the paper moves forward in the publication pipeline.

These solutions were designed to work together, creating a smooth workflow from initial check to final decision.

Sprint Findings

After completing our tests, we dug into the patterns and insights that emerged. Here's what we learned:

Words Matter!

Copy turned out to be a huge part of the Integrity experience - it's what drives users to take action, especially for power users like our target audience. This was particularly important in workspace pages and permission settings. Even in features that tested well, like email issue reports, the copy needed to be solid to build trust and get users to act.

Small Actions, Big Impact

We found out that simple actions like sharing a report or marking an issue as resolved were more complex than we thought. For example, sharing integrity results turned out to be pretty sensitive - we had to be careful about who gets access to maintain research integrity. The activity logs were a hit though, as they helped multiple reviewers stay in sync about which issues were already handled.

Simplicity Wins

As designers, we naturally want to optimize everything. We thought grouping checks with recommendations would make things clearer and help users focus on tasks. But users proved us wrong - they actually preferred seeing all checks at once! Sometimes simpler really is better.

Interactive Elements Need Action

The analytics dashboard taught us something important: pretty data isn't enough. Users wanted to:

  • Click on indicators for more details
  • Get actionable recommendations (like steps to verify suspicious authorship)
  • Collapse data points
  • See which data points showed certain patterns and indicators

They also expected guidance on using the data and wanted contextual details like manuscript IDs and journal names to help their investigations. Without these practical tools, users were less likely to take action on what they found.

These findings didn't just shape our immediate designs - they set the foundation for how I would develop the concept further after the sprint.

Phase 2 • Jan. to Mar. 2024

Making Data Actionable

After the sprint, work continued with just me and the Integrity team. While the sprint had given us great insights with the wider group, it was now time to focus on detailed implementation with my direct team. Our first priority was clear: transform our dashboards from simple data displays into interactive tools for investigation. The sprint had shown us that users needed more than just pretty visualizations - they needed quick ways to dig deeper into the data.

Making Numbers Mean Something

When users see that 22 papers have plagiarism issues, their next thought is "Which papers are these?" We redesigned our analytics to answer this question instantly. Every data point became a gateway to more detailed information:

  • Clickable bars in charts to reveal all related papers
  • Filtering mechanisms that work across different organizational levels
  • Direct paths from statistics to specific papers needing attention

Hierarchical Intelligence

We also tackled the challenge of scale. Publishers needed to investigate issues across different organizational levels:

  • Journal Level: See patterns across all volumes
  • Volume Level: Focus on issues within a specific publication period
  • Paper Level: Dive into individual submission details

For example, if a publisher is looking at Journal-level analytics and clicks on a plagiarism indicator, they can now see all affected papers across every volume of that journal. This hierarchical approach helps publishers spot broader patterns while maintaining the ability to investigate specific cases.
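The drill-down described above can be modeled as filtering one flat list of paper records by hierarchy level. A minimal sketch, assuming a hypothetical record shape and field names for illustration:

```python
# Hypothetical flat records, one per paper, tagged with their place in the
# journal > volume hierarchy; all names here are assumptions.
PAPERS = [
    {"journal": "J. Oncology", "volume": "12", "id": "P-001", "flags": ["plagiarism"]},
    {"journal": "J. Oncology", "volume": "13", "id": "P-002", "flags": ["plagiarism", "self_citation"]},
    {"journal": "J. Oncology", "volume": "13", "id": "P-003", "flags": []},
]

def drill_down(papers, flag, journal=None, volume=None):
    """Return IDs of flagged papers, scoped to journal and/or volume level."""
    return [
        p["id"] for p in papers
        if flag in p["flags"]
        and (journal is None or p["journal"] == journal)
        and (volume is None or p["volume"] == volume)
    ]

# Journal level: every affected paper across all volumes.
journal_hits = drill_down(PAPERS, "plagiarism", journal="J. Oncology")
# Volume level: the same query narrowed to one publication period.
volume_hits = drill_down(PAPERS, "plagiarism", journal="J. Oncology", volume="13")
```

Because every level queries the same underlying records, a click on a journal-level indicator and a click on a volume-level one are just the same filter with different scope, which is what keeps the drill-down consistent.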

This development phase was about more than just adding clickable elements - it was about creating a natural flow from identifying problems to investigating them, making each piece of data a starting point for action.

Phase 3 • Apr. to Jul. 2024

Refining Check Actions and Feedback

Following our work on analytics, my team and I tackled one of the more nuanced challenges revealed in our sprint testing: the check acknowledgment feature. What seemed like a simple feature initially turned out to be quite complex when we dug deeper into user expectations and needs.

Language

During our sprint testing, we'd discovered something crucial about how language affects user behavior. The term "acknowledge" was creating unexpected confusion among our testers. Some thought they were marking false positives, others believed they were just indicating they'd seen the issue, and some thought they were validating the check's result. This ambiguity was particularly problematic in research integrity, where precision and clarity are essential. We realized we needed to do more than just change a few words - we needed to rethink how we communicated these actions to align with how publishers actually make decisions about research integrity issues.

Making Actions Clear

The above insight led us to completely redesign our action system. Instead of a single "acknowledge" button, we created a more flexible, bi-directional system that better reflected the nuanced decisions publishers make.

  • When a check fails, users can "Mark as Acceptable" if they've investigated and found no real issues.
  • When a check passes but something seems off, they can "Mark as Issue" to flag it for further investigation.

We also added the ability to reverse these decisions with "Revert to Issue" and "Revert to Acceptable" options - because we learned that investigation findings can change as new information comes to light.

To ensure these decisions were well-documented, we made it mandatory to provide a reason for each action. This wasn't just about collecting data; it was about creating a clear record of the decision-making process that would be valuable for future reference.
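The whole action system, bi-directional status changes, "Mark"/"Revert" labeling, and the mandatory reason, can be sketched as a small state object. The status values and label wording below are modeled on this case study, not taken from the product's actual API:

```python
from datetime import datetime, timezone

class Check:
    """One integrity check on a paper, with a reviewable decision history.

    Status values ("issue", "acceptable") and action labels are assumptions
    modeled on the case study.
    """

    def __init__(self, ai_status: str):
        self.ai_status = ai_status      # verdict from the automated check
        self.status = ai_status
        self.history: list[dict] = []   # chronological audit trail

    def set_status(self, new_status: str, reason: str, actor: str) -> str:
        if new_status not in ("issue", "acceptable"):
            raise ValueError(f"Unknown status: {new_status}")
        if not reason.strip():
            raise ValueError("A reason is required for every status change")
        # "Revert to ..." when returning to the AI's original verdict,
        # "Mark as ..." when overriding it.
        verb = "Revert to" if new_status == self.ai_status else "Mark as"
        label = f"{verb} {new_status.capitalize()}"
        self.history.append({
            "action": label,
            "reason": reason,
            "actor": actor,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        self.status = new_status
        return label
```

Rejecting empty reasons at the lowest level is what makes the documentation requirement enforceable: every entry in the audit trail carries who acted, why, and when.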

Contextual Activity Tracking

The final piece of this puzzle was making all these actions transparent and trackable. We developed a check-level activity tracking system that appears the moment any action is taken. Rather than hiding this information in a separate section, we made it immediately accessible through an activity icon that appears right after the first action on a check.
This approach created a natural flow: take an action, see the activity icon appear, click to review the history. Every decision, every status change, and every provided reason is captured chronologically, giving teams the full context they need to understand how and why decisions were made throughout the investigation process.

Design Principles Discovered

Through this redesign, we established four principles for AI in high-stakes professional environments:

Indicators, Not Verdicts

Present findings as "indications of concern" not "failures." Preserve professional authority.

Patterns, Not Lists

Group findings into 3-5 meaningful categories rather than 23+ exhaustive items.

Investigation Support, Not Just Detection

Provide tools that help professionals investigate patterns, not just identify them.

Transparency for Trust

Show how AI reached conclusions so professionals can verify or override suggestions.

These principles apply beyond publishing: medical diagnosis, legal review, financial compliance; basically anywhere professional judgment matters more than AI certainty.

Conclusion

Our focus on suggestive disclosure has redefined how Integrity Manager serves its users. Starting with a design sprint that revealed key insights, we transformed static alerts into interactive tools for investigation. By making data meaningful and actions clear, we've created a platform that doesn't just tell publishers what's wrong - it helps them make informed decisions about how to make it right.

Results

Our journey from the design sprint through the development phases has already shown significant impact, even as we continue to evolve the platform.


Our focus on actionability has transformed how publishers interact with integrity checks:

  • Data points are no longer just numbers - they're gateways to investigation
  • Check statuses can be confidently overridden with clear documentation
  • Teams can track decision-making processes at both broad and granular levels
  • Publishers can investigate issues across their entire organizational hierarchy

Lessons Learned & Growth

What started as a design sprint exploring basic actions evolved into a deeper understanding of how publishers work with integrity checks. We learned that effective integrity management isn't just about finding problems - it's about guiding users confidently through the resolution process.

Key Takeaways:

  • Context is crucial - users need more than just data to make decisions
  • Clear language drives confident action
  • Activity tracking needs to happen at multiple levels
  • Analytics must provide clear paths to investigation

Next Steps

We've already started exploring designs for several crucial features that will enhance how publishers manage their integrity investigations. Initial concepts and user flows are being developed for:

Investigation Conclusion

We're developing a comprehensive way for users to formally conclude their investigations. This will enable teams to compile findings from multiple checks and document final decisions. Publishers will have control over what happens to papers in their publication pipeline while maintaining a clear record of all concluded investigations for future reference.

Report Sharing and Escalation

The platform will soon support more sophisticated collaboration through secure report sharing with specific team members. We're designing clear escalation paths for cases requiring expert review, with role-based access control for sensitive information. This will create clear workflows for transferring investigation ownership when needed.

Author Communication

We're working to integrate author communication directly into the platform. This includes creating templates for common clarification requests and establishing secure channels for author responses. All communications will be documented and integrated into the investigation record, ensuring a complete history of the investigation process.