Moderator: Welcome, everyone, and thank you for joining today’s webinar, Pinnacle 21 Community or Enterprise? How to Decide Which Is Right for Your Study. In just a few minutes, my colleague Jen will take us through some decision criteria related to study phase, data complexity, team needs, and more. Meanwhile, all attendees will remain on mute for the next 45 minutes or so to ensure a clear audio experience. If any questions come to mind, please enter them in the Q&A panel, and we will address as many as we can today. If we don’t get to your question, we’ll certainly reply by email in the next few days. Rest assured, we will provide all registrants with a recording of today’s webinar so you can review the material or share it with colleagues.
Moderator: Now, let me introduce my colleague and today’s speaker. I’m joined today by Jen Manzi, Associate Director and Solutions Consultant for Pinnacle 21. She brings a wealth of experience from prior roles at Wyeth, GSK, and Merck. She’s one of many in-house experts on standardization, clinical programming, and validation with regard to both CDISC standards and our customers’ proprietary standards.
Moderator: Before I hand things over to Jen, a few words about Certara. We have a bold mission, and that’s to support better and faster drug development through scientific expertise, biosimulation, and advanced software. Our technologies are used by more than 2,000 companies and by 23 global regulatory agencies. Our company comprises more than 1,500 employees, including 430 PhDs. We’re proud to say that our tools have supported 90% of novel drugs approved by the FDA since 2014.
Moderator: Here, you can see the universe of Certara solutions. They are both service- and software-based. These support our customers from molecule to market, and despite the boxes separating them, they really do work in concert, with the outputs of one technology often serving as the input for an adjacent one. That is certainly true with Pinnacle 21, where we feed downstream into our solutions for regulatory writing and submission.
Moderator: With that stage set, I am pleased to give the floor to Jen. Jen?
Jen Manzi: Welcome, everyone, and thanks for attending. Assuming most of you have played video games before, think about playing Super Mario Bros. or really any video game. In the beginning, things usually feel pretty manageable. You learn the basics, get through the early levels, and the strategy that works at the start is often enough to keep moving forward.
Jen Manzi: But as you continue, the game changes. The levels get harder. The timing gets tighter. There are more obstacles, more moving parts, and a lot less room for error. At that point, you are leveling up. That is very similar to what happens in clinical data validation as studies move from Phase 1 to Phase 2 and beyond.
Jen Manzi: In Phase 1, the workflow can feel pretty manageable. There are fewer domains, fewer dependencies, and fewer people involved. But as studies grow, the process starts to look very different: more data, more cross-domain relationships, more review cycles, and more pressure to keep everything aligned and submission-ready. So today, we’re going to talk about what happens when your study is leveling up, but your validation process has not leveled up with it.
Jen Manzi: Now that we’ve set up the idea of leveling up, let’s look at where we’re headed today. First, I’ll talk about what changes between Phase 1 and Phase 2, because that shift is really the foundation for the rest of the discussion. Then we’ll look at where validation workflows start to break down as complexity increases. After that, I’ll walk through a few practical scenarios that highlight the common pressure points teams face. From there, we’ll talk about what is actually needed to scale validation in a sustainable way. Finally, I’ll compare Pinnacle 21 Community and Pinnacle 21 Enterprise so you can think about which approach best fits your study environment, your team structure, and your submission needs.
Jen Manzi: Let’s start with what changes between Phase 1 and Phase 2. What changes is not just the amount of data. The study also becomes more connected, with more domains, more stakeholders, and more dependencies across the process. This is why validation starts to feel different as studies scale.
Jen Manzi: Before we talk about where workflows break down, let’s first look at the differences between Phase 1 and Phase 2 and beyond. At a high level, those differences are easy to recognize. Phase 1 studies typically involve fewer subjects, fewer sites, and a narrower set of domains. By Phase 2, the scope expands. Subject count increases. Studies are often multi-site and sometimes global. The data package grows to include additional SDTM domains and more cross-domain relationships.
Jen Manzi: The studies themselves also tend to run longer, which means more validation cycles over time. So the challenge is not simply more data. It’s more complexity across the study, including more stakeholders, more dependencies, and more opportunities for inconsistency. That creates a greater need for visibility into what has been validated, what still needs attention, and how issues are being resolved.
Jen Manzi: This is a really key point for the webinar. Validation complexity does not increase in a simple linear way as studies become more complex. It grows much faster as the number of datasets expands and more findings are generated. In Phase 1, you may be working with around eight datasets and a narrower validation scope. By Phase 2, that may grow to 25 or more datasets, with more findings and a broader set of cross-domain relationships to review. So the challenge is not just that there is more to validate. The relationships, dependencies, and review effort all increase together. This is often the point where teams begin to feel that validation is becoming much more difficult to manage with the same workflow.
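The "grows much faster than linear" point above can be made concrete with a back-of-the-envelope sketch. If every pair of datasets can potentially carry a cross-domain relationship that a reviewer must reconcile, the review surface grows quadratically with the dataset count. This is a rough illustration, not a Pinnacle 21 metric; the dataset counts are the figures quoted in the talk.

```python
# Illustrative only: treat every pair of datasets as a potential
# cross-domain relationship to review (n choose 2).
from math import comb

def potential_cross_domain_pairs(n_datasets: int) -> int:
    """Number of dataset pairs a reviewer might need to reconcile."""
    return comb(n_datasets, 2)

phase1 = potential_cross_domain_pairs(8)   # ~8 datasets in Phase 1
phase2 = potential_cross_domain_pairs(25)  # 25+ datasets by Phase 2

print(phase1)  # 28 pairs
print(phase2)  # 300 pairs
# Datasets grew ~3x, but the pairwise review surface grew ~10x.
```

Even under this simplified model, tripling the dataset count roughly decuples the number of relationships to keep aligned, which matches the intuition that the same workflow stops scaling.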
Jen Manzi: Before going further, I want to make one point clear. This is not about saying that Pinnacle 21 Community is the wrong tool. Community can work very well for smaller studies and simpler workflows. The issue is that as study complexity increases, the workflow around validation often becomes harder to manage. Teams may start to run into disconnected processes, manual tracking, limited collaboration, more manual metadata work, and less visibility into the history of what has happened. At that point, validation becomes more than just running checks. It becomes a process of managing findings across people, reruns, and submission-related deliverables.
Jen Manzi: So far, we’ve been talking at a high level about how validation becomes harder to manage as studies grow. Now I want to bring that to life with a few practical scenarios. These are the kinds of situations that help show where pressure starts to build in real day-to-day work and why the workflow that felt manageable earlier may start to break down.
Jen Manzi: One of the first places teams feel the impact of greater study complexity is in the sheer volume of findings. In Phase 1, a run might produce 50 to 100 findings, which is often still manageable to review manually. But by Phase 2, that can grow into hundreds or even more than a thousand findings across many more datasets. At that point, the challenge is no longer just reviewing one Excel validation report. It’s managing multiple versions of those reports over time, comparing what changed, carrying comments forward, and keeping track of what still needs attention across reruns. So the issue is not only volume. It is the manual effort required to manage the volume over time.
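The manual comparison described here — working out what changed between two static report versions — is essentially a set difference over finding keys. A minimal sketch, assuming a stable key of (rule ID, dataset, variable); the keys and the helper below are invented for illustration and are not Pinnacle 21 output:

```python
# Sketch: compare two validation runs by a stable finding key.
# The rule IDs and domains below are made-up examples.
def diff_runs(previous: set, current: set) -> dict:
    """Classify findings across two validation runs."""
    return {
        "resolved": previous - current,    # gone since the last run
        "new": current - previous,         # introduced by recent changes
        "persisting": previous & current,  # still need attention
    }

run1 = {("SD0002", "DM", "AGE"), ("SD1076", "AE", "AESTDTC")}
run2 = {("SD1076", "AE", "AESTDTC"), ("SD0063", "LB", "LBTESTCD")}

result = diff_runs(run1, run2)
print(sorted(result["persisting"]))  # [('SD1076', 'AE', 'AESTDTC')]
```

Doing this by eye across hundreds of findings and multiple Excel versions, while also carrying comments forward, is exactly the manual burden the talk is describing.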
Jen Manzi: The next place complexity tends to show up is metadata and Define.xml. In earlier studies, that may still feel manageable, but as the study grows, teams are often dealing with more datasets, more variables, and far more metadata to maintain, including value-level metadata, methods, comments, codelists, and controlled terminology. That creates more opportunity for inconsistency and manual error. Even small misalignments can lead to validation findings, rework across teams, and gaps between the data and the submission documentation. By this point, metadata quality becomes a major part of submission quality.
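One way to picture the misalignment risk: even a simple cross-check between the variables in a dataset and the variables in its metadata specification can surface gaps in either direction. This is an illustrative sketch, not how Pinnacle 21 implements its checks; the example spec is hypothetical.

```python
# Sketch: a misalignment in either direction becomes a validation
# finding and downstream rework. Example DM-domain spec is made up.
def metadata_gaps(dataset_vars: set, spec_vars: set) -> dict:
    """Variables present in one place but missing from the other."""
    return {
        "in_data_not_in_spec": dataset_vars - spec_vars,
        "in_spec_not_in_data": spec_vars - dataset_vars,
    }

data = {"STUDYID", "USUBJID", "AGE", "SEX", "ARMCD"}
spec = {"STUDYID", "USUBJID", "AGE", "SEX"}  # spec lags behind the data

gaps = metadata_gaps(data, spec)
print(gaps["in_data_not_in_spec"])  # {'ARMCD'}
```

Multiply this kind of check across dozens of datasets, value-level metadata, codelists, and controlled terminology, and metadata quality quickly becomes inseparable from submission quality.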
Jen Manzi: The last scenario is the broader workflow itself starting to break down. A very common pattern is to run validation, review the Excel output, send emails, assign work manually, and then repeat the entire cycle again after the next rerun. In a smaller study, that may still be workable. But as the study becomes more complex, that process starts to create version-control issues, duplicate effort, and limited visibility across the team. The challenge is that the workflow is no longer really carrying the process. People are carrying it through spreadsheets, emails, and memory.
Jen Manzi: After looking at these scenarios, the next question is really what it takes to scale validation in a practical way. By this point, it should be clear that the challenge is not just running validation. It is also making sure the broader process supports consistency, traceability, documentation, and submission readiness as the study grows in complexity. So from here, I want to shift into what teams typically need in order to make both validation and submission preparation more sustainable, more consistent, and easier to manage over time.
Jen Manzi: When studies move into Phase 2 and beyond, what teams really need is not just more validation. They need the right operational foundation around it. At a high level, that usually comes down to four things: centralized validation output, integrated issue tracking, metadata-driven processes, and auditability. These are the capabilities that help teams manage complexity in a more consistent and sustainable way while also better supporting submission readiness.
Jen Manzi: The first requirement is centralization. Without that, teams often end up with multiple Excel spreadsheets from different users who each have multiple versions of the spreadsheet, and limited visibility into what has already been reviewed by the entire team. That can create confusion, duplicate effort, and a lot of wasted time. Centralization helps by bringing validation results into one place so the team has a more consistent view of what is happening. It creates a clear source of truth and makes collaboration much easier as the study becomes more complex.
Jen Manzi: The second requirement is issue tracking, because validation does not stop once findings are generated. Those findings are issues that need to be reviewed, assigned, discussed, and followed through across validation reruns. Without a way to track that in one place, teams can lose context, rely too heavily on manual assignment, and end up repeating the same work. Issue tracking helps keep the process connected by preserving the history, showing where things stand, and making it easier to manage findings from one validation run to the next.
Jen Manzi: The third requirement is metadata-driven processes. As studies become more complex, teams are not only managing more datasets. They are also managing more variable metadata, value-level metadata, methods, comments, codelists, and controlled terminology. When these pieces are handled manually or in disconnected places, inconsistency becomes much more likely. A metadata-driven approach helps create better alignment across the data, the metadata, and the submission documentation. That improves consistency, reduces manual rework, and better supports submission readiness.
Jen Manzi: The final requirement is auditability. As studies become more complex, teams need visibility into what has happened with an issue over time, including comments, assignments, status changes, and updates across reruns. Without that, people are often reconstructing the history from separate validation files and emails. Having this visibility helps teams understand where things stand, work from current information, and manage the review process more continuously.
Jen Manzi: The key takeaway is that complicated study designs introduce structural complexity. That changes how validation needs to be managed. At that point, validation becomes more than a technical check. It becomes an ongoing process that requires standardization, centralization, and collaboration. Those are the capabilities that help teams stay aligned as studies scale.
Jen Manzi: Now that we’ve talked about what changes as studies scale and what teams need in order to support that complexity, the next step is to connect those needs to the validation approach itself. This is where the comparison between Pinnacle 21 Community and Pinnacle 21 Enterprise becomes helpful. Pinnacle 21 Community is file-based and generally supports more individual workflows, while Pinnacle 21 Enterprise is centralized and designed for a multi-user environment. Neither is inherently right or wrong. They’re designed for different levels of complexity and different workflow needs. So the question is really about fit: which approach best supports the way your team needs to work?
Jen Manzi: This table summarizes the functional differences we have been discussing. Pinnacle 21 Community executes validation locally, while Pinnacle 21 Enterprise centralizes it. Rule consistency in Community can depend on versioning and local setup, whereas Enterprise provides a more standardized environment. Community is spreadsheet-based, while Enterprise provides an interactive system. Collaboration in Community is mostly manual, while Enterprise supports multi-user work in one place. Metadata handling is more manual in Community, while Enterprise integrates metadata more directly. Community offers limited visibility into issue history compared with the fuller traceability available in Enterprise. Each of these differences ties back to the pain points we just covered, including version confusion, repeated manual work, limited shared visibility, and difficulty carrying context across reruns.
Jen Manzi: This is where the workflow difference really comes into focus. In a manual, file-based process, teams run validation, review the Excel report, send emails, track follow-up separately, and then go through the same cycle again with each rerun. In a more connected workflow, these activities stay together. Validation, review, issue tracking, and follow-up all happen in the same environment, and the history carries forward over time. That means teams spend less effort reconstructing context and more time actually moving the work forward.
Jen Manzi: This is where operational impact becomes easier to quantify. When validation, issue management, and documentation are handled in a more connected way, teams can reduce a significant amount of manual effort. The time savings are not limited to running validation alone. They also come from managing issues more efficiently and reducing the work needed to prepare submission documentation. So the overall value is less time spent across the study life cycle, not just within a single validation run.
Jen Manzi: Beyond time savings, the real impact shows up in how the work gets done. Faster issue resolution means teams can address problems earlier and keep the process moving. Improved data consistency helps reduce avoidable rework and supports a stronger submission package. Better inspection readiness comes from having a process that is more organized, more traceable, and easier to explain. So the value is not just efficiency. It is also lower operational risk and greater confidence in the quality of the submission.
Jen Manzi: Now let’s move from the high-level comparison into what this actually looks like for the user. Up to this point, we’ve been talking more about workflow and process. From here, I want to make the differences more concrete by showing how Pinnacle 21 Community and Pinnacle 21 Enterprise compare in day-to-day use.
Jen Manzi: Let’s start with validation in Pinnacle 21 Community. In this workflow, each validation run produces a static spreadsheet of findings. That means the reviewer typically has to open the file, research the issues, add comments, and then use separate communication channels to assign or discuss follow-up. From the beginning, the output is already detached from the broader workflow. It is useful, but it is static. Because it is static, a lot of the management activity around those findings has to happen somewhere else. That becomes the foundation for many of the challenges we discussed earlier, especially as more users and more reruns are involved.
Jen Manzi: This is where the pain really becomes obvious. Each time you revalidate in a spreadsheet-driven process, you get another static output file. The new file does not inherently preserve the comments or resolution context from the prior one. Now the team is bouncing between multiple spreadsheets and email threads trying to understand what changed, what was already addressed, and what still needs action. That is why I jokingly call it Excel madness. But for many teams, it is a very real operational problem. The amount of effort spent managing the artifacts starts to compete with the effort spent actually resolving the issues.
Jen Manzi: By contrast, in Pinnacle 21 Enterprise, the issues are maintained in the system. Instead of handling each validation run as a separate, disconnected run, the issues and findings live within a central environment where users can filter, search, and review them in context. That alone changes the experience significantly. You are no longer relying on a static file as the primary place where work happens. You are working within a system that is meant to support the life cycle of these issues and findings. That becomes especially valuable when the number of issues increases or when multiple people need to access the same current view of the validation state.
Jen Manzi: Another practical difference is that Pinnacle 21 Enterprise keeps all comments, assignments, and issue status in one place. When you revalidate, you are not starting with a blank spreadsheet and trying to piece together what happened in the last round. The system keeps the full history with the issues and provides an audit trail that tells you the story of what happened with that issue over time. You can see the comments, who it was assigned to, status changes, and other updates as the issue moves through review and resolution. That makes it much easier to understand where things stand, work from current information, and keep the review process moving continuously rather than relying on email and separate tracking tools to fill in the gaps.
Jen Manzi: Pinnacle 21 Enterprise also helps make issue resolution more efficient by providing fix tips and explanations within the system. Pinnacle 21 Enterprise includes fix tips for top-firing issues, which can guide users toward what to review and help save time in that review process. Users can also create their own fix tips and explanations, allowing teams to standardize how they handle and document the issues based on their own processes. The result is a more efficient workflow, greater consistency, and higher-quality submissions.
Jen Manzi: It’s also worth noting that Pinnacle 21 Enterprise still allows users to export the validation report to Excel when needed. So even though the workflow is centralized, teams can still work with a familiar format when it is helpful. The key difference is that the system remains the primary source of truth, while the Excel version is available as an additional option. That gives organizations stronger governance without taking away flexibility.
Jen Manzi: Pinnacle 21 Enterprise also gives users in-system access to CDISC standards and controlled terminology for reference. Instead of having to go somewhere else to look up NCI codes or related standards information, users can access that directly within Pinnacle 21 Enterprise. Organizations can also store their own versions of standards and terminologies in the system and validate against them as well. Community does not offer that same built-in reference capability, which means users need to manage that information outside the tool.
Jen Manzi: Enterprise also gives you scores and metrics that help you understand the current state of the validations in your data package. That includes visibility into things like how many issues are still open, how much progress has been made, and where the biggest problem areas may be. Most importantly, Enterprise provides a data fitness score, which gives you an indication of how compliant the data are for submission. Not only does it give you a score, but it gives you specific ways to increase that data fitness score for that particular data package. This makes it easier to prioritize the work that will have the greatest impact on submission readiness.
Jen Manzi: Now let’s talk about Define.xml. In Community, Define.xml creation starts by generating an Excel specification from the validation metadata. That provides a starting point, but it is not the finished Define.xml. Community mainly populates the dataset and variable metadata, and then the remaining parts of the Excel specification still need to be completed manually before the Define.xml can be generated. The value here is that it helps teams get started, but there is still manual work required to finish that specification and then the Define.xml.
Jen Manzi: Enterprise expands the Define.xml process quite a bit. The Define Designer within Pinnacle 21 Enterprise gives you many ways to create your Define.xml so that it fits into your organization’s processes. You can create it from a standard, copy from another study, import Excel specifications or XML files, or start with a blank template. But the biggest time saver is using the validation metadata to create the Define.xml. Most of the Define.xml will be completed for you in this case. It will even add value-level metadata, codelists, and terms with NCI codes. That very manual task of adding the page numbers from the annotated CRF into the Define.xml can also be handled by Enterprise with a click. On top of that, the Define Designer in Enterprise gives you version control and the ability to compare against prior versions or other Define.xml files in the system. The overall value is not just speed. It’s a more efficient, more consistent, and more manageable process.
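To make the "create the Define.xml from metadata" idea more tangible, here is a toy sketch of how a specification might be turned into Define-XML-like structure. Real Define-XML 2.x requires the ODM and def: namespaces, OIDs, and many mandatory attributes that are omitted here; this only illustrates the dataset- and variable-level nesting that validation metadata can pre-populate, and the example spec is hypothetical.

```python
# Toy Define-XML-like skeleton built from a spec dict. This is NOT
# valid CDISC Define-XML; namespaces and required attributes omitted.
import xml.etree.ElementTree as ET

spec = {"DM": ["STUDYID", "USUBJID", "AGE"]}  # hypothetical spec

odm = ET.Element("ODM")
mdv = ET.SubElement(ET.SubElement(odm, "Study"), "MetaDataVersion")
for dataset, variables in spec.items():
    igd = ET.SubElement(mdv, "ItemGroupDef", Name=dataset)
    for var in variables:
        # Each variable becomes a reference inside its dataset group.
        ET.SubElement(igd, "ItemRef", ItemOID=f"IT.{dataset}.{var}")

print(ET.tostring(odm, encoding="unicode"))
```

The point of the sketch is simply that dataset and variable metadata map mechanically into the document structure, which is why a tool holding that metadata can generate most of the Define.xml automatically.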
Jen Manzi: The reviewer’s guide is another important differentiator. Community does not provide a way to generate the Study Data Reviewer’s Guide, or SDRG, so that process has to happen outside of the tool. Pinnacle 21 Enterprise can generate the SDRG and populate a lot of it for you, including the issue summary table, which is often a very long manual task. This is a huge time saver.
Jen Manzi: Finally, Enterprise includes a suite of reports that helps organizations better understand and manage their portfolio, which is something Community does not provide. Instead of only seeing what is happening in one study, teams can look across studies to spot trends, recurring issues, standard usage, and other broader quality patterns. That gives organizations better visibility at the portfolio level and supports more informed process improvements over time.
Jen Manzi: As a final takeaway, think back to the video game example we started with. In Super Mario Bros., the early levels feel manageable. But as you keep moving forward and leveling up, the game changes. The obstacles become harder, the path becomes more complex, and the approach that worked at the beginning is not always enough later on. That is exactly what happens as studies move from Phase 1 to Phase 2 and beyond.
Jen Manzi: Validation becomes more than just running checks. It becomes a broader process of managing issues, metadata, documentation, collaboration, and submission readiness. Community may still be a strong fit for simpler workflows. But when study complexity levels up, many teams need a validation approach that levels up with it. So the real question is not which product is better overall. It is which approach best fits the complexity of your study and helps your team stay efficient, aligned, and submission-ready.
Moderator: Thank you, Jen. A fantastic exploration of the two solutions and some helpful decision criteria. It’s not easy to know where to draw the line, but you certainly gave us a lot to think about. Not surprisingly, we did have a number of questions come in. If you’ve got 10 minutes or so, I’d love to get your answers.
Moderator: I’ll start with this: Is there a difference in what types of data can be validated in Community versus Enterprise? Are there any differences between SDTM and ADaM capabilities in those two?
Jen Manzi: In Community, you can validate SDTM, ADaM, and SEND datasets. But in Enterprise, you can validate SDTM, ADaM, SEND, and BIMO. BIMO is the extra one that you get in Enterprise.
Moderator: Jen, what should an organization have in place before adopting Enterprise?
Jen Manzi: All you would really need is some datasets that have been transformed into SDTM or ADaM. Then you can put them in the system and validate. It’s a really user-friendly system, so there is not much required. It is not that difficult to use at all.
Moderator: Excellent. How early do you realistically expect teams to start thinking about submission readiness? Is that something we should be building in from day one?
Jen Manzi: Yes. Start thinking about that as soon as you can, even working backward from submission all the way to data collection. As far as validation is concerned, as soon as you have transformed datasets, start thinking about submission readiness. Validate as early and as often as possible so you can resolve those issues as they come up.
Moderator: Do you need to have all your data in, or is it valuable to run a validation even with just some data to get in the habit?
Jen Manzi: No, you don’t need all of your data in. You can run with as few or as many datasets as you have. That way, you’re catching those issues early and up front.
Moderator: Excellent. Can different reviewers work on the same study at the same time in Enterprise?
Jen Manzi: Yes, that’s one of the benefits. You are working from the same issues list in Enterprise. There is no single spreadsheet like there is in Community. You’re working from the same list of issues, and you can have many different people in it at the same time. You can see each other’s comments and things like that.
Moderator: Here’s another one that came in: How do you drive adoption with teams that are very comfortable with Community?
Jen Manzi: Once you really see what Enterprise gives you and you’re in Enterprise, it’s pretty hard to go back to Community. I know we get used to working in Excel spreadsheets in this industry. In Enterprise, there are parts of it that look like Excel, and you can even still export your issue report to Excel if you want to see it that way.
Moderator: So Community gives you a skill set that’s going to transfer very well to Enterprise.
Jen Manzi: Yes, exactly. In Community, you need to look at your spreadsheet and figure out for yourself what to do with each issue. There is no real way to prioritize the issues. You have that in Enterprise. In addition, you have the fix tips that Pinnacle 21 Enterprise provides to point you in the right direction on how to resolve the issues. It is like the Cadillac version of resolving validation issues.
Moderator: Thank you. I think you touched on this in your presentation, but with Community, you have a very valuable one-time snapshot, and the onus is on the user to sort of mentally reconstruct what might have changed, whereas Enterprise has that perfect memory, correct?
Jen Manzi: Yes, you can see all of the comments and everything that has happened to that issue over time throughout the life of your study.
Moderator: In addition to validation, there are other submission deliverables, like Define.xml. We did have a question about that particular file. How much of the Define.xml can Enterprise generate automatically from the validation data?
Jen Manzi: Using the validation metadata, it can create about 80% of the Define.xml for you. It also has in-cell validation, so it points you to the spots that you kind of need to look at. Then you would import your annotated CRF, and it automatically adds in the page numbers from that for you. That’s very helpful.
Moderator: You mentioned the support for the reviewer’s guides. How much of that is automated, and where does the user have the flexibility to modify if needed?
Jen Manzi: The reviewer’s guide is generated from the PhUSE template, and we add in what we can. The big benefit to the reviewer’s guide is really that, if you are holding your explanations in Enterprise, you have a completed issue summary table. That’s the huge time saver there. There are different parts of it that it can fill out, like your standards, dictionaries, and subject-level domains, and that type of thing. Then you would download it and add in any pertinent study information that Enterprise isn’t going to know to add.
Moderator: And you can generate a new copy of the reviewer’s guide at any time.
Jen Manzi: Yes. That issue summary table can take a lot of time to produce, so it is a huge time saver there.
Moderator: Jen, I have a question here about formats. We know we can look at SDTM, ADaM, and SEND, but what formats can be validated in Community versus Enterprise?
Jen Manzi: I believe both of them are the same. In Enterprise, your ZIP file needs to contain XPT, CSV, SAS, Excel, or even JSON files. We’re covering all the common formats we would see day in and day out.
Moderator: A question here about controlled terminology: Can you validate using custom controlled terminology in Enterprise?
Jen Manzi: You can. What’s nice is that Enterprise has the P21 engine, and when you’re validating using that engine, you can validate against custom standards and controlled terminology.
Moderator: Perfect. This is a great one to end on, Jen, because this user might be preparing to migrate some of their studies. Are there common challenges teams face when moving from a spreadsheet workflow, which Community does enable, to Enterprise?
Jen Manzi: I really just think it is the change of going into a new system. But again, it’s very user-friendly, and it is mostly a matter of learning to work in the system instead of sending emails. Being able to assign issues in the system is great. In the spreadsheet workflow, you have to send emails or circulate the spreadsheet, even to data managers. We always say to give your data managers access to Enterprise so you can assign them issues right in Enterprise.
Jen Manzi: I think it is really just getting used to a new process around things. But once you see how streamlined everything is and how much easier it is than handling all of those different versions of the spreadsheets, it’s not going to be a big deal. Other than that, I really can’t think of any challenges with that.
Moderator: Excellent. Well, thank you. If we didn’t get to your question, we will email a response in the next day or two.
Moderator: In the meantime, I want to thank Jen for her time and all of you for your engagement and questions. I have on the screen here some other resources. There is a lot of education to be had on your own time by visiting Pinnacle 21 on the web and the Pinnacle 21 Help Center. With that said, I will let us all go, but rest assured I will send a link where you can watch this on demand at your leisure.
Moderator: One more final thing: As soon as I end this recording, there will be a short survey. Please do provide your response. Your feedback is what guides us toward delivering more education like this.
Moderator: Thank you, Jen. Thank you all, and have a great day. Bye.
Jen Manzi: Thanks, everyone.

Jen Manzi
Subject Matter Expert and User Advocate, Pinnacle 21 by Certara

Jen Manzi is a Subject Matter Expert and User Advocate at Pinnacle 21. She has over 20 years of Pharma/Life Sciences industry experience in Clinical Trials and Safety Data Management. Jen has held various roles within these areas, including eCRF Programmer, SDTM Delivery Lead, Product Owner and Programmer of Batch Processes, Vendor Relationship Manager, Program and Process Improvement Manager, and Validation Lead.
Make an inquiry about Pinnacle 21 Enterprise
Pinnacle 21 Enterprise builds on Community’s core validation capabilities with:
- Centralized validation results and multi-user collaboration
- Integrated issue tracking with comments, assignments, and a full audit trail
- Metadata-driven Define.xml creation with version control
- Automated generation of the Study Data Reviewer’s Guide (SDRG)
- Data fitness scores and portfolio-level reporting
Make an inquiry to learn how your organization can benefit from reduced risk, increased quality, and guided submission readiness with Pinnacle 21 Enterprise.