Image from eventmodeling.org
Event Modeling has emerged as a powerful method for designing Business Information Systems, ensuring alignment between business teams and development teams. It also enables a modular design approach, where independent teams can develop functional "slices" in parallel, shortening time-to-market at a fixed cost.
Although the strength of Event Modeling lies in its simplicity, mastering it can require significant effort and time. However, AI-powered tools are now transforming this process by generating a fully detailed event model from any workflow description in minutes. This accelerates system design while maintaining structure, clarity, and close collaboration with domain experts.
AI also has the potential to revolutionize the implementation phase, allowing code to be generated directly from an event or domain model.
At DDD Hungary, Staffan from Qlerify demonstrated how AI can streamline Event Modeling by automating both model creation and code generation. You can watch the first 45 minutes of the presentation in the video below. However, this article offers a more complete and up-to-date walkthrough.
This guide offers a step-by-step approach to Event Modeling in Qlerify, drawing on the core concepts from Adam Dymitruk’s original Event Modeling post—including the automation and translation patterns. We won’t detail all Qlerify features here, as you can find more information in our article on Event Storming and DDD (links provided at the bottom of the page) as well as in the in-app help texts.
We'll follow the seven steps of Event Modeling and recreate the example presented in the blog post 'What is Event Modeling?' While some visual elements, such as the "waterline," are presented differently in Qlerify, we've successfully applied Event Modeling in many cases using the approach presented here.
Now, let’s dive in. Before starting, ensure you're logged into Qlerify with a blank workflow open and review the following Card Type Settings:
You can also navigate to the AI tab and pick the LLM of your choice. For this walkthrough, we used OpenAI ChatGPT-4o.
Brainstorm state-changing events together with your human colleagues and AI. Use the prompt below or describe your own scenario to let AI assist in brainstorming.
The workflow is based on the hotel website described in Adam Dymitruk’s original 2019 Event Modeling blog post. It represents an Event Model for our hotel chain, enabling customers to book rooms online while allowing us to manage cleaning and other hotel operations. Include the following steps: 1) Guest registered an account. 2) Manager added a room. 3) Guest booked a room. 4) Manager prepared the room. 5) Guest checked in. 6) Coordinates sent from the guest's GPS. 7) Guest left the hotel. 8) Guest checked out. 9) Guest requested payment. 10) Payment succeeded.
In the empty workflow, click on 'Generate workflow with AI,' paste the prompt, and then click 'Generate workflow' using the default options.
Wait for the process to complete. You should now see something like this:
In this step, we'll review the timeline and ensure it tells a coherent story composed of events. This time, AI generated two swimlanes (GPS Device and Payment System), which seem to represent systems or bounded contexts rather than actors. In Qlerify, we typically use swimlanes for roles such as Guest or Manager rather than for systems or bounded contexts. Since these steps are automated (as we know from the blog post), we can treat Automation as an actor: create an Automation swimlane and move the events from the GPS Device and Payment System swimlanes into it.
From the blog post, we know that there are two more automated events: Left hotel and Checked out. Move these events as well to the new Automation swimlane. Now, our workflow looks like this:
Note: The arrows between events represent a plausible timeline but do not indicate that each event automatically triggers the next. They also do not enforce a strict sequence in which the events must occur. Instead, they provide an example of how events typically unfold within the organization.
Next, we will handle Steps 3, 4, and 5 together for each event. In Qlerify, you can select an event and display the UI mockup (Step 3) together with the command name (Step 4) and the preceding read model (Step 5). This makes it convenient to cover all three steps simultaneously.
Think of the storyboard as a UI mockup of an input form with a submit button that the actor fills out and submits. In automated steps, imagine it as a robot filling out the form and pressing the submit button.
Step 4 involves naming the command that will be invoked when the form from Step 3 is submitted, whether manually or automatically.
In traditional Event Modeling, Step 5 is known as "Identify Outputs." We will instead refer to it as the Read Model and consider the read model as being consumed before the command is triggered.
Why before? Because it’s easier to reason about the data required upfront while designing the UI mockup. This approach also centers the discussion around a single actor. (Sometimes the resulting view of a command is targeted at a different actor.)
Note: In most cases, whether you view a read model as the output of a domain event or as the input for another is simply a matter of the order in which you build the model.
Note: Although one event typically has exactly one command associated with it, there may be multiple read models.
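To make this concrete, here is a minimal sketch in TypeScript of what one slice boils down to: a command that mirrors the input form, the event it produces, and a read model built for later steps. The type and field names below are our own illustrations for the hotel example, not something Qlerify generates.

```typescript
// Step 4: the command carries exactly the fields of the input form from Step 3.
interface RegisterGuestAccount {
  type: "RegisterGuestAccount";
  email: string;
  fullName: string;
}

// The state change recorded when the command succeeds.
interface GuestRegisteredAccount {
  type: "GuestRegisteredAccount";
  guestId: string;
  email: string;
  fullName: string;
  registeredAt: string; // ISO timestamp
}

// Step 5: a read model consumed by a later step (e.g. the booking form).
interface RegisteredGuest {
  guestId: string;
  fullName: string;
}

// Handling the command: validate the form data and emit the event.
function handleRegisterGuestAccount(cmd: RegisterGuestAccount): GuestRegisteredAccount {
  if (!cmd.email.includes("@")) {
    throw new Error("A valid email address is required");
  }
  return {
    type: "GuestRegisteredAccount",
    guestId: crypto.randomUUID(), // id generation is an implementation detail
    email: cmd.email,
    fullName: cmd.fullName,
    registeredAt: new Date().toISOString(),
  };
}
```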
Now, let's proceed by applying Steps 3 through 5 to each event one at a time. We'll make one further simplification by combining Steps 3 and 4 into what we will call the Write Model. To get started:
The Guest registered account event is triggered by the submission of a regular input form that must be manually filled out by the guest.
To add a Read Model, click on + Read Model and select From Workflow with AI.
This step, like the previous one, uses a regular input form. However, it's tailored for the manager to add rooms to the booking system.
This step features another regular input form, designed for the room booking process.
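To make "the read model is consumed before the command" concrete, here is a small sketch of how an Available Rooms read model could be folded from earlier events and shown in the booking form. The event and field names are illustrative assumptions, not part of the generated model.

```typescript
// Illustrative events from earlier steps in the timeline.
interface RoomAdded  { type: "RoomAdded";  roomNumber: string; }
interface RoomBooked { type: "RoomBooked"; roomNumber: string; guestId: string; }
type HotelEvent = RoomAdded | RoomBooked;

// The read model the booking form reads before the Book Room command is sent.
type AvailableRooms = Set<string>;

// Projection: replay the events and keep only rooms that are not booked.
function projectAvailableRooms(events: HotelEvent[]): AvailableRooms {
  const available = new Set<string>();
  for (const event of events) {
    if (event.type === "RoomAdded")  available.add(event.roomNumber);
    if (event.type === "RoomBooked") available.delete(event.roomNumber);
  }
  return available;
}

// Example: room 101 is added and then booked, so only room 102 remains available.
const rooms = projectAvailableRooms([
  { type: "RoomAdded", roomNumber: "101" },
  { type: "RoomAdded", roomNumber: "102" },
  { type: "RoomBooked", roomNumber: "101", guestId: "g-1" },
]);
console.log([...rooms]); // ["102"]
```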
Let's jump ahead to the Sent Coordinates event. This event is the first of two steps in integrating with an external system. Unlike previous events, it is not triggered by a human but is automatically initiated by the guest's mobile device. Because this interaction occurs outside our system (unless we are also building the GPS tracker), it functions as a black box.
For this event, we will delete the Write Model and not create a Read Model. Instead, we'll focus on the next step, where we receive this domain event in our context and determine whether the guest has left the hotel or not.
This event is automatically triggered by the preceding external event, Sent Coordinates, and has no visual input form. In Event Modeling, this step is referred to as a Translation: it takes the coordinates from the Sent Coordinates domain event and interprets them to determine whether the guest has left the hotel. If the coordinates indicate that the guest has left, the Left Hotel event is triggered; otherwise, no event is triggered.
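As an illustration of the translation pattern, the sketch below interprets the external coordinates against a hypothetical hotel location and only emits the Left Hotel event when the guest is outside a chosen radius. The coordinates, radius, and function names are assumptions made for the example.

```typescript
// External event received from the guest's device (outside our system).
interface CoordinatesSent {
  type: "CoordinatesSent";
  guestId: string;
  latitude: number;
  longitude: number;
}

// Internal event we record when the interpretation says the guest has left.
interface GuestLeftHotel {
  type: "GuestLeftHotel";
  guestId: string;
}

// Hypothetical hotel location and radius used for the interpretation.
const HOTEL = { latitude: 59.3293, longitude: 18.0686, radiusKm: 0.5 };

// Translation: turn the external coordinates into a domain decision.
// Returns the Left Hotel event if the guest is outside the radius, otherwise nothing.
function translateCoordinates(event: CoordinatesSent): GuestLeftHotel | undefined {
  const distanceKm = haversineKm(
    event.latitude, event.longitude,
    HOTEL.latitude, HOTEL.longitude,
  );
  return distanceKm > HOTEL.radiusKm
    ? { type: "GuestLeftHotel", guestId: event.guestId }
    : undefined;
}

// Rough great-circle distance between two coordinates, in kilometres.
function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(a));
}

// Example: coordinates a few kilometres away from the hotel emit the event.
const decision = translateCoordinates({
  type: "CoordinatesSent", guestId: "g-1", latitude: 59.35, longitude: 18.10,
});
console.log(decision); // { type: "GuestLeftHotel", guestId: "g-1" }
```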
Now, we arrive at the domain event Checked Out Guest, an automated step that requires no manual input. As usual, add a Read Model using AI.
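Conceptually, this is the automation pattern: a "robot" reads a read model and invokes a command with no human involved. A minimal sketch, with hypothetical names, might look like this:

```typescript
// Read model consulted by the automation: bookings that are still open.
interface OpenBooking { bookingId: string; guestId: string; roomNumber: string; }

// Command invoked by the automation, exactly as a human would from a form.
interface CheckOutGuest { type: "CheckOutGuest"; bookingId: string; }

// Automation ("robot filling in the form"): reacts to Left Hotel,
// looks up the guest's open booking in the read model and issues the command.
function onGuestLeftHotel(
  event: { type: "GuestLeftHotel"; guestId: string },
  openBookings: OpenBooking[],
  dispatch: (command: CheckOutGuest) => void,
): void {
  const booking = openBookings.find((b) => b.guestId === event.guestId);
  if (booking) {
    dispatch({ type: "CheckOutGuest", bookingId: booking.bookingId });
  }
}
```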
At this stage, the payment can be processed, and the customer is presented with a manual form to complete the transaction.
The input form should capture all necessary details for processing the payment, such as the card number, expiration date, and any additional required fields.
Note that in this simplified scenario, no explicit notification is sent to the guest. Our interpretation is that the payment form will become available the next time the guest accesses their booking.
The Payment succeeded step is automated similarly to the Checked out guest event, but with one key difference: in addition to reading a query and invoking a command, it involves an outgoing call to an external payment service provider. This call must succeed for the Payment succeeded domain event to be triggered. Although this outgoing call isn't explicitly modeled here, we'll describe the success criteria using a GWT (Given-When-Then) scenario.
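One way to capture such a GWT scenario is as plain data: the events that have already happened, the command being attempted, and the events expected as a result. The event, command, and field names below are illustrative, and the provider call itself sits outside the model.

```typescript
// Given-When-Then for the Payment succeeded slice, expressed as plain data.
// Amounts and identifiers are made up for the example.
const paymentSucceededScenario = {
  given: [
    { type: "GuestCheckedOut", bookingId: "b-1" },
    { type: "PaymentRequested", bookingId: "b-1", amount: 200, currency: "USD" },
  ],
  when: { type: "ProcessPayment", bookingId: "b-1", amount: 200, currency: "USD" },
  // The external provider must confirm the charge for this event to be recorded.
  then: [
    { type: "PaymentSucceeded", bookingId: "b-1", amount: 200, currency: "USD" },
  ],
};
```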
Now it's time to organize the system's parts into autonomous components. This step is visualized slightly differently than in standard Event Modeling, although the underlying concept remains the same. Switch over to the Domain Model tab and assign bounded contexts to the Aggregate Roots.
Following the blog post:
With these assignments, you establish clear boundaries between decoupled parts of the system.
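As a rough illustration only (the actual grouping comes from your own domain model in Qlerify), assigning Aggregate Roots to bounded contexts amounts to a mapping like the one below, with hypothetical context and aggregate names.

```typescript
// Hypothetical grouping of aggregate roots into bounded contexts.
const boundedContexts: Record<string, string[]> = {
  Reservations: ["Guest", "Booking"],
  Housekeeping: ["Room"],
  Payments: ["Payment"],
};
```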
We have reached the final step of Event Modeling. Qlerify not only helps you write the GWTs but also lets you prioritize them into iterations, all while maintaining a complete view of the end-to-end flow and understanding how prioritization impacts it.
To proceed, navigate to the User Story Map tab under the workflow diagram. Here, you'll see each GWT lined up under its corresponding event. You can add additional GWTs using the sidebar or the button at the end of the page (make sure to select an event first).
Define the first iteration of your project by carefully selecting which GWTs should be assigned to Release 1. Notice that the selected GWTs move up into a separate horizontal section. You can also add additional GWTs to Release 2 and rearrange them by dragging and dropping once the releases have been assigned.
Next, apply a filter for Release 1 using the filter above the workflow diagram. This provides an end-to-end view of exactly which parts of the workflow are planned for the first iteration. This powerful view not only helps you understand the impact of your prioritizations but also serves as a valuable tool for discussing priorities with stakeholders to ensure everyone is on the same page.
This concludes the seven steps of Event Modeling. You have built a solid system model using best practices collected from decades of software modeling experience. From here, you can have your team start implementing the slices—each Read Model and each Write Model represents one slice. You can also complete the domain model by enriching it with Entities; see the guide on DDD linked at the bottom of this page. After that, let AI generate the code skeleton for you, as described in the article "AI Generated Code," also linked in the footer.
During the presentation, Staffan demonstrated the application of AI in event modeling using Qlerify. The key takeaways from the demo include:
Event modeling is a powerful approach to system design, and AI is making it more efficient than ever. By automating repetitive tasks—such as event identification, UI generation, and read model creation—AI enables teams to focus on refining business logic and system architecture.
As AI technology continues to evolve, its role in domain modeling will only expand. While AI cannot replace human decision-making, it serves as an invaluable assistant in accelerating event modeling workflows and reducing time-to-market for software projects.
If you're interested in exploring AI-assisted event modeling, sign up for a Qlerify account today (link in the footer). The future of system design is here, and AI is unlocking new possibilities for efficiency and innovation.