
In 2023, Morressier launched Integrity Manager to address integrity issues in scholarly publishing. To sustain the product's momentum and unlock sales opportunities, we took a new approach, seeking to elevate the product's market appeal.
A platform that focuses on providing publishing tools for Scholarly Publishers and Societies, with submission management workflows and AI-powered research integrity solutions.
A specialized AI system focused on protecting research owned by Publishers and Research Institutes. It provides users with preflight and integrity checks that suggest integrity measures to take, all in a single dashboard for ease of use.
The practice of conducting research in ways that promote trust and confidence in all aspects of science. This includes overseeing the whole research lifecycle, from hypothesis to experiment, through to publication and the formation of the scientific record.
In late 2023, Morressier faced setbacks that blocked the company from boosting Integrity Manager sales. Our customers valued the product's integrity checks, but they needed tools to act on detected compromises, not just identify them. Our offering at the time also added extra steps, reducing efficiency, so our team led a Design Sprint, among other initiatives, to bring actionability and efficiency to the forefront.
9 months: Continuous improvement and UX upgrades, September 2023 to February 2024
B2B team with 2 Fullstack Engineers, 1 QA Engineer, 1 Product Manager, 1 Engineering Manager, and a User Researcher (with company-wide obligations). I took on the role of Senior Product Designer and UX Researcher.
6 closed deals with medium and large publishers
3 New Enterprise Prospects in Pipeline
Increased user engagement with integrity reports through actionable interfaces
Award for Innovation in Publishing by Association of Learned and Professional Society Publishers (ALPSP) 2024
Imagine you're a Research Integrity Officer at a major publishing house. You've just discovered potential data manipulation in a breakthrough cancer research paper that's about to be published. Every hour you spend figuring out next steps isn't just an inconvenience - it's potentially allowing compromised research to influence critical medical decisions.
This isn't theoretical - in 2020, The Lancet and The New England Journal of Medicine (NEJM) had to retract a high-profile COVID-19 study that had already influenced global health policies. The paper, which raised concerns about the safety of certain COVID-19 treatments, led to the World Health Organization temporarily halting several clinical trials. By the time questions about data integrity triggered the retraction, the paper had already affected medical decisions worldwide and shaken public trust in COVID-19 research. Fast, effective integrity checking could have prevented this cascade of consequences.
While Integrity Manager successfully detected research misconduct, publishers struggled to investigate findings effectively. The AI would flag "23 failed checks," but integrity officers needed to see patterns across those checks to apply their expert judgment.
Initial feedback from sales calls in mid-2023 revealed a critical gap. As one customer put it: "We can see the problems, but what do we do next?"
The challenge became apparent in early adoption metrics. Despite strong market interest, we faced difficulties closing sales. Publishers needed more than just detection. They needed tools that surfaced patterns for their investigation, not exhaustive lists. They needed AI that helped them recognize what to look for, then enabled them to act on their own conclusions.
Our analysis revealed three critical gaps in the platform:
Users struggled to identify meaningful patterns across 23+ individual check failures. The interface presented data as isolated items rather than grouped insights. Integrity officers spent more time parsing the list than investigating the actual issues.
Key insight: Professionals are trained pattern recognizers, but our AI was presenting noise instead of signal.
The platform delivered verdicts ("FAILED 23 checks") rather than surfacing patterns for professional assessment. Publishers needed AI that helped them see what they might have missed, not AI that told them what to conclude.
Key insight: In high-stakes environments, professionals reject AI that claims certainty.
The manual process of reviewing findings, grouping related issues, creating investigation pathways, and deciding next steps created significant burden. This became especially problematic when dealing with multiple issues, forcing users to create their own systems for managing investigations.
Key insight: Detection without investigation support doesn't solve the publisher's actual problem.
The path from detection to action wasn't straightforward. While our integrity checks successfully identified potential misconduct, publishers were still spending days investigating issues, switching between multiple tools, and manually tracking their progress. Our journey to solve this began with a collaborative design sprint that brought fresh perspectives to the problem, followed by intensive design work to transform initial concepts into scalable solutions.
Through this process, we discovered that effective actionability isn't just about adding buttons or features - it's about understanding the complex decision-making process publishers go through when investigating integrity issues, and creating tools that support each step of that journey.
Rather than rushing to solutions, we ran a collaborative design sprint that brought fresh perspectives to the problem. With backing from our CTO and Product VP, we assembled a focused team to explore how we could transform Integrity Manager from a detection tool into an investigation platform.
I led the sprint alongside Leandro, another designer on our platform team. As Senior Product Designers deeply familiar with Morressier's product ecosystem, our combined experience across various Morressier products gave us comprehensive insight into how integrity checks could be better integrated and made more actionable. Thomas Fortmann, our Staff User Researcher, provided crucial support in conducting both internal and external usability tests; his expertise helped ensure we gathered meaningful feedback throughout the sprint. Mădălina Pop, the Integrity Team Product Manager, took on the critical role of Decider, keeping us aligned with our sprint goals and making key decisions when needed.
Getting started with the sprint, we sorted through questions and ideas that needed organizing. The first step was writing everything down as "How Might We" (HMW) questions. We then voted on which ones felt most important to tackle:
The voting confirmed what our earlier analysis had suggested: effective actionability wasn't about adding buttons or features. It was about understanding how publishers think when investigating integrity issues and creating tools that support each step of that journey.
After nailing down our goal, we got straight into the work. Here's how we broke it down:
We kicked off by digging into all the client feedback we had. This was crucial - we needed everyone on the same page and properly informed before we started throwing around solutions. Having this solid foundation made it way easier to focus on the real issues and map out proper processes.

Before jumping into sketching, we took a look at our design backlog to see if there was anything we could learn from previous work. Then, with fresh inspiration, we dove into sketching sessions. When it came time to vote on which sketches to develop further, we focused on ideas that could give users both helpful context and clear actions to take. We needed to separate the must-haves from the nice-to-haves - keeping our sprint goal front and center helped with this. We brought all these pieces together in a solution assembly session with the whole sprint team, piecing together a complete solution based on the sketches we'd picked.

By day 4, we were ready to start prototyping. Leandro and I took turns building and testing the prototype, while also getting the usability testing scripts ready.
We wanted to stay flexible and prepared for the next day's tests, making sure we could keep gathering useful data as we went. To get balanced feedback, we spent two days testing the prototype with six different people:

A key part of our sprint focused on defining what true actionability would look like in the platform. We weren't just thinking about random actions - we needed a cohesive system that would support publishers throughout their investigation process.
Here's what we came up with:
Instead of just showing individual check results, we designed a way to combine related checks to tell a bigger story. For example, by looking at over-citation, duplicated references, and self-citations together, we could indicate potential citation manipulation. This helps users quickly understand the bigger picture of what's happening with a paper.
We created a way for users to mark flagged checks as "acknowledged" after they've looked into them and decided they're not actual problems. This helps teams keep track of what's been reviewed versus what still needs attention.
We developed an activity log that shows all actions taken within a report - who acknowledged checks, who shared the report, and any other interactions. This creates a clear audit trail of the investigation process.
We added the ability to share reports with team members, making it easier for publishers to collaborate on investigations rather than working in isolation.
We created a clear endpoint for investigations where users can mark a paper as either acceptable or unacceptable after reviewing all checks and insights. This decision directly impacts whether the paper moves forward in the publication pipeline.
These solutions were designed to work together, creating a smooth workflow from initial check to final decision.
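The check-grouping idea can be sketched in a few lines. This is an illustrative sketch, not Morressier's implementation: the check names, group labels, and the two-check threshold are all assumptions made for the example.

```python
# Illustrative sketch (not Morressier's implementation): grouping related
# integrity checks into higher-level signals. Check names are hypothetical.

SIGNAL_GROUPS = {
    "Possible citation manipulation": {
        "over_citation", "duplicated_references", "self_citation",
    },
    "Possible image manipulation": {
        "duplicated_figure", "spliced_image",
    },
}

def summarize(failed_checks: set) -> list:
    """Return the higher-level signals suggested by the failed checks.

    A signal is surfaced only when at least two of its member checks
    failed, so a single noisy check doesn't trigger a pattern on its own.
    """
    signals = []
    for signal, members in SIGNAL_GROUPS.items():
        if len(failed_checks & members) >= 2:
            signals.append(signal)
    return signals

failed = {"over_citation", "self_citation", "spliced_image"}
print(summarize(failed))  # ['Possible citation manipulation']
```

The point of the grouping is exactly what the testing later confirmed: the signal names suggest a pattern for the professional to investigate, rather than delivering a verdict.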
After completing our tests, we dug into the patterns and insights that emerged. Here's what we learned:
Copy turned out to be a huge part of the Integrity experience - it's what drives users to take action, especially for power users like our target audience. This was particularly important in workspace pages and permission settings. Even in features that tested well, like email issue reports, the copy needed to be solid to build trust and get users to act.
We found out that simple actions like sharing a report or marking an issue as resolved were more complex than we thought. For example, sharing integrity results turned out to be pretty sensitive - we had to be careful about who gets access to maintain research integrity. The activity logs were a hit though, as they helped multiple reviewers stay in sync about which issues were already handled.
As designers, we naturally want to optimize everything. We thought grouping checks with recommendations would make things clearer and help users focus on tasks. But users proved us wrong - they actually preferred seeing all checks at once! Sometimes simpler really is better.
The analytics dashboard taught us something important: pretty data isn't enough; users wanted to act on it.
They also expected guidance on using the data and wanted contextual details like manuscript IDs and journal names to help their investigations. Without these practical tools, users were less likely to take action on what they found.
These findings didn't just shape our immediate designs - they set the foundation for how I would develop the concept further after the sprint.
After the sprint, work continued with just me and the Integrity team. While the sprint had given us great insights with the wider group, it was now time to focus on detailed implementation with my direct team. Our first priority was clear: transform our dashboards from simple data displays into interactive tools for investigation. The sprint had shown us that users needed more than just pretty visualizations - they needed quick ways to dig deeper into the data.
When users see that 22 papers have plagiarism issues, their next thought is "Which papers are these?" We redesigned our analytics to answer this question instantly. Every data point became a gateway to more detailed information.
We also tackled the challenge of scale. Publishers needed to investigate issues across different organizational levels:
For example, if a publisher is looking at Journal-level analytics and clicks on a plagiarism indicator, they can now see all affected papers across every volume of that journal. This hierarchical approach helps publishers spot broader patterns while maintaining the ability to investigate specific cases.
This development phase was about more than just adding clickable elements - it was about creating a natural flow from identifying problems to investigating them, making each piece of data a starting point for action.
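The hierarchical drill-down described above can be illustrated with a toy data model. The class and field names here are assumptions for the sketch, not Morressier's actual schema; the idea is simply that a metric clicked at journal level resolves to the affected papers across every volume beneath it.

```python
# Hypothetical sketch of the drill-down idea: a flagged metric at any
# level of the hierarchy resolves to the affected papers beneath it.

from dataclasses import dataclass, field

@dataclass
class Paper:
    manuscript_id: str
    issues: set = field(default_factory=set)

@dataclass
class Volume:
    papers: list

@dataclass
class Journal:
    name: str
    volumes: list

def papers_with_issue(journal: Journal, issue: str) -> list:
    """All manuscript IDs in any volume of the journal flagged for `issue`."""
    return [
        paper.manuscript_id
        for volume in journal.volumes
        for paper in volume.papers
        if issue in paper.issues
    ]

j = Journal("Journal of Examples", [
    Volume([Paper("M-001", {"plagiarism"}), Paper("M-002", set())]),
    Volume([Paper("M-003", {"plagiarism", "self_citation"})]),
])
print(papers_with_issue(j, "plagiarism"))  # ['M-001', 'M-003']
```

Because the query spans all volumes, a publisher sees the broader pattern at journal level while still landing on concrete manuscript IDs to investigate.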
Following our work on analytics, my team and I tackled one of the more nuanced challenges revealed in our sprint testing: the check acknowledgment feature. What seemed like a simple feature initially turned out to be quite complex when we dug deeper into user expectations and needs.
During our sprint testing, we'd discovered something crucial about how language affects user behavior. The term "acknowledge" was creating unexpected confusion among our testers. Some thought they were marking false positives, others believed they were just indicating they'd seen the issue, and some thought they were validating the check's result. This ambiguity was particularly problematic in research integrity, where precision and clarity are essential. We realized we needed to do more than just change a few words - we needed to rethink how we communicated these actions to align with how publishers actually make decisions about research integrity issues.
This insight led us to completely redesign our action system. Instead of a single "acknowledge" button, we created a more flexible, bidirectional system that better reflected the nuanced decisions publishers make.
We also added the ability to reverse these decisions with "Revert to Issue" and "Revert to Acceptable" options - because we learned that investigation findings can change as new information comes to light.
To ensure these decisions were well-documented, we made it mandatory to provide a reason for each action. This wasn't just about collecting data; it was about creating a clear record of the decision-making process that would be valuable for future reference.

The final piece of this puzzle was making all these actions transparent and trackable. We developed a check-level activity tracking system that appears the moment any action is taken. Rather than hiding this information in a separate section, we made it immediately accessible through an activity icon that appears right after the first action on a check.
This approach created a natural flow: take an action, see the activity icon appear, click to review the history. Every decision, every status change, and every provided reason is captured chronologically, giving teams the full context they need to understand how and why decisions were made throughout the investigation process.
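The combination of reversible status changes, mandatory reasons, and a chronological activity trail can be sketched as a small state machine. This is an illustrative model under assumed names ("issue"/"acceptable" statuses, a `Check` class), not the product's actual data model.

```python
# Illustrative sketch (names are assumptions): a check's status moves
# between "issue" and "acceptable" in both directions, every transition
# requires a reason, and each one is appended to a chronological log.

from datetime import datetime, timezone

ALLOWED = {
    ("issue", "acceptable"),   # mark as acceptable
    ("acceptable", "issue"),   # revert to issue
}

class Check:
    def __init__(self, name: str):
        self.name = name
        self.status = "issue"
        self.activity = []  # chronological audit trail

    def set_status(self, new_status: str, user: str, reason: str) -> None:
        if not reason:
            raise ValueError("A reason is required for every status change")
        if (self.status, new_status) not in ALLOWED:
            raise ValueError(f"Cannot move from {self.status} to {new_status}")
        self.activity.append({
            "at": datetime.now(timezone.utc),
            "user": user,
            "from": self.status,
            "to": new_status,
            "reason": reason,
        })
        self.status = new_status

check = Check("duplicated_references")
check.set_status("acceptable", "ada", "Duplicates are an intentional appendix")
check.set_status("issue", "grace", "New version re-introduced the duplicates")
print(check.status, len(check.activity))  # issue 2
```

Appending every transition, rather than overwriting the status, is what turns the feature into an audit trail: the full history of who decided what, when, and why stays available to the team.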

Through this redesign, we established four principles for AI in high-stakes professional environments:
Present findings as "indications of concern" not "failures." Preserve professional authority.
Group findings into 3-5 meaningful categories rather than 23+ exhaustive items.
Provide tools that help professionals investigate patterns, not just identify them.
Show how AI reached conclusions so professionals can verify or override suggestions.
These principles apply beyond publishing: medical diagnosis, legal review, financial compliance, basically anywhere professional judgment matters more than AI certainty.
Our focus on suggestive disclosure has redefined how Integrity Manager serves its users. Starting with a design sprint that revealed key insights, we transformed static alerts into interactive tools for investigation. By making data meaningful and actions clear, we've created a platform that doesn't just tell publishers what's wrong - it helps them make informed decisions about how to make it right.
Our journey from the design sprint through the development phases has already shown significant impact, even as we continue to evolve the platform.
6 closed deals with medium and large publishers
3 New Enterprise Prospects in Pipeline
Increased user engagement with integrity reports through actionable interfaces
Award for Innovation in Publishing by Association of Learned and Professional Society Publishers (ALPSP) 2024
Our focus on actionability has transformed how publishers interact with integrity checks.
What started as a design sprint exploring basic actions evolved into a deeper understanding of how publishers work with integrity checks. We learned that effective integrity management isn't just about finding problems - it's about guiding users confidently through the resolution process.
Key Takeaways: