
Resolve Agent: A Technical Approach to Revolutionizing Troubleshooting With AI

by Nikhil Kalyankar
Published Feb 17, 2025

For over a decade, Boomi Resolve, part of process reporting, has helped users troubleshoot and resolve process errors by surfacing relevant articles and connecting them to helpful resources from Boomi Community pages. However, its legacy approach had significant limitations, prompting a much-needed transformation. This blog explores how Boomi leveraged generative AI technology to take Resolve to the next level of performance.

The Legacy Approach: Elasticsearch-Based Retrieval

The original Boomi Resolve relied heavily on an Elasticsearch database to fetch relevant articles using keyword-based searches. While this feature was functional and innovative for its time, it faced several challenges that hampered its overall effectiveness:

  1. High Costs: Maintaining an Elasticsearch infrastructure proved expensive. These costs became increasingly difficult to justify as the feature’s effectiveness waned.
  2. Low Relevance: The feature could only provide results for a small number of error types. Often, the results were not comprehensive or actionable enough for users.
  3. Stagnation: The feature was not significantly refined or enhanced over the last decade to leverage newer technologies.
  4. Outdated Data: The data stored in the Elasticsearch database often became outdated or irrelevant over time. This required manual updates and curation by administrators, with no automation in place to ensure the information remained current.

The legacy feature was designed to find stored error messages similar to the one a user submitted, locating either the single best match or multiple close matches among known unique errors. The process worked as follows:

  • Query Construction: The feature built a customized Elasticsearch query, breaking the user’s error message into individual words. Specific rules were applied, such as requiring a minimum 75% overlap between the words in the error stack and the database entries, ensuring that only closely related results were considered (a sketch of such a query follows this list).
  • Scoring System: The feature used a custom scoring mechanism. This scoring method prioritized matches based on factors like message length and word count, ensuring that more complex and meaningful matches were ranked higher.
  • Result Evaluation: Once search results were retrieved, each was evaluated for exactness, and the feature tracked the closest match to surface the most relevant result.
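
For illustration, here is a minimal sketch of how such a query might look using the Elasticsearch Python client. The cluster address, index name, and field name are hypothetical; only the 75% overlap rule comes from the description above, and Boomi's actual query is not public.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical cluster

def find_similar_errors(error_message: str, index: str = "resolve-errors"):
    """Keyword search requiring ~75% of the error's words to match,
    mirroring the overlap rule described above."""
    response = es.search(
        index=index,  # "resolve-errors" is an assumed index name
        query={
            "match": {
                "message": {  # assumed field holding stored error text
                    "query": error_message,
                    # Elasticsearch's minimum_should_match expresses the
                    # "75% of words must overlap" rule
                    "minimum_should_match": "75%",
                }
            }
        },
        size=5,
    )
    return [hit["_source"] for hit in response["hits"]["hits"]]
```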

While this feature aimed to strike a balance between precision and flexibility, it was ultimately limited by the constraints of traditional Elasticsearch-based search.

The New Era: Retrieval-Augmented Generation (RAG)

To overcome the limitations of the legacy feature, Boomi Resolve underwent a comprehensive transformation. The new approach leverages Retrieval-Augmented Generation (RAG), a framework that combines retrieval techniques with generative AI models: RAG retrieves the most relevant documents or articles for a given input and uses them as context to generate precise, insightful responses, ensuring both accuracy and relevance. By blending semantic search techniques with generative AI capabilities, the new approach delivers a dramatically improved user experience.
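
At a high level, the flow can be pictured as three stages: retrieve, re-rank, generate. The sketch below stubs out each stage with placeholder logic purely to show the shape of the pipeline; real implementations of each stage are described in the sections that follow.

```python
def retrieve(error_message: str) -> list[str]:
    """Semantic search: return candidate article chunks (stubbed here)."""
    return ["Article chunk about connector timeouts..."]

def rerank(query: str, docs: list[str]) -> list[str]:
    """Reorder candidates by relevance to the query (stubbed here)."""
    return docs

def generate(query: str, context: list[str]) -> str:
    """Ask an LLM to answer using the retrieved context (stubbed here)."""
    return f"Suggested fix grounded in {len(context)} article(s)."

def resolve(error_message: str) -> str:
    docs = retrieve(error_message)               # 1. retrieval
    ranked = rerank(error_message, docs)         # 2. re-ranking
    return generate(error_message, ranked[:3])   # 3. grounded generation

print(resolve("java.net.SocketTimeoutException: Read timed out"))
```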

Expanding Resources: A New Scale of Support

The transition from the legacy feature to the new Resolve Agent has not only improved the accuracy and relevance of search results but also vastly expanded the available resources. While the legacy system indexed only 46 articles, the new solution encompasses approximately 150 times more resources, leveraging a wealth of documentation, community articles, and other data sources. This increase allows users to access a significantly broader and more comprehensive range of solutions, further enhancing the troubleshooting experience.

OpenSearch Indexing Job

The OpenSearch Indexing Job ensures that the feature remains up to date with the latest help documentation, community articles, and other relevant sources. This batch job operates as follows:

  • Data Retrieval: The job periodically fetches content from multiple sources, including help documentation and community articles.
  • Markdown Chunking: Retrieved data is broken down into smaller, manageable chunks using a markdown chunking process. This ensures that each chunk is focused, enabling more precise search and retrieval.
  • Embedding Creation: The job uses the Amazon Bedrock Titan Embeddings model to generate vector embeddings for each chunk. These embeddings capture the semantic meaning of the text, making them ideal for advanced search capabilities.
  • Indexing in OpenSearch: The embeddings and their corresponding content are stored in OpenSearch indexes. This allows the semantic search feature to retrieve the most relevant articles efficiently during a query.
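
Putting the steps above together, a simplified version of the indexing job might look like the following. The model ID is Bedrock's public Titan Embeddings identifier; the index name, document fields, and OpenSearch connection details are illustrative assumptions rather than Boomi's actual configuration.

```python
import json

import boto3
from opensearchpy import OpenSearch

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
opensearch = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])  # hypothetical

EMBED_MODEL = "amazon.titan-embed-text-v1"  # public Titan Embeddings model ID
INDEX = "resolve-articles"                  # hypothetical index name; assumed
                                            # to be created with a knn_vector
                                            # mapping on the "embedding" field

def embed(text: str) -> list[float]:
    """Generate a vector embedding for one markdown chunk via Bedrock."""
    resp = bedrock.invoke_model(
        modelId=EMBED_MODEL,
        body=json.dumps({"inputText": text}),
    )
    return json.loads(resp["body"].read())["embedding"]

def index_chunks(chunks: list[dict]) -> None:
    """Store each chunk and its embedding in OpenSearch for k-NN search."""
    for i, chunk in enumerate(chunks):
        opensearch.index(
            index=INDEX,
            id=str(i),
            body={
                "content": chunk["text"],
                "source_url": chunk["url"],
                "embedding": embed(chunk["text"]),
            },
        )
```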

Semantic Search: Leveraging Error Stack Traces

One of the standout features of the Resolve Agent is its ability to leverage error stack traces generated during the execution of a Boomi process. These stack traces are rich with information, capturing the sequence of events leading to an error and providing critical context for troubleshooting. Here’s how this capability is integrated into semantic search to improve relevance:

  • Breaking Down Stack Traces: The feature parses the error stack trace into its constituent components, such as the error class, the descriptive message, and the remaining stack frames (see the sketch after this list).
  • Contextual Matching: Semantic search algorithms use the stack trace to match with articles that address not just similar errors but also the broader context of those errors.
  • Enhanced Relevance: By focusing on the specific details within the stack trace, the feature minimizes irrelevant results and retrieves articles that align closely with the user’s issue.
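
A simplified version of this flow, building on the embed helper and index from the previous sketch, might look like this. The parsing logic and the k-NN query shape (from the OpenSearch k-NN plugin) are illustrative assumptions about how the pieces fit together.

```python
def parse_stack_trace(stack: str) -> dict:
    """Split a stack trace into the components described above: the
    error class, its descriptive message, and the remaining frames."""
    first_line, _, rest = stack.partition("\n")
    error_class, _, message = first_line.partition(": ")
    return {"error_class": error_class, "message": message, "frames": rest}

def semantic_search(stack: str, k: int = 5) -> list[dict]:
    """Embed the most informative parts of the trace and run a k-NN query."""
    parts = parse_stack_trace(stack)
    vector = embed(f"{parts['error_class']}: {parts['message']}")
    resp = opensearch.search(
        index=INDEX,
        body={
            "size": k,
            # OpenSearch k-NN plugin query against the embedding field
            "query": {"knn": {"embedding": {"vector": vector, "k": k}}},
        },
    )
    return [hit["_source"] for hit in resp["hits"]["hits"]]
```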

Re-ranker: Fine-Tuning Search Results

The re-ranker model is a pivotal component in the new Resolve Agent, ensuring that users receive the most relevant and helpful results. Here’s how it works:

  • Input from Semantic Search: After semantic search retrieves an initial set of articles based on the user query, the re-ranker model processes these results to determine their relevance.
  • Feature Extraction: The re-ranker evaluates each result using a variety of features, including textual similarity to the query, semantic context from the stack trace, and metadata.
  • Scoring Mechanism: Each article is assigned a relevance score using machine learning algorithms.
  • Reordering Results: Based on the scores, the re-ranker reorders the articles, ensuring that the most pertinent ones appear at the top of the list.
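
As a sketch, invoking such a re-ranker hosted on SageMaker could look like the following. The endpoint name and the cross-encoder-style request/response schema are assumptions; Boomi's actual model and contract are not public.

```python
import json

import boto3

sm_runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")
RERANK_ENDPOINT = "resolve-reranker"  # hypothetical endpoint name

def rerank(query: str, docs: list[dict], top_n: int = 3) -> list[dict]:
    """Score each retrieved article against the query and reorder.
    The payload schema below is an assumed cross-encoder style contract."""
    payload = {"query": query, "texts": [d["content"] for d in docs]}
    resp = sm_runtime.invoke_endpoint(
        EndpointName=RERANK_ENDPOINT,
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    scores = json.loads(resp["Body"].read())["scores"]
    ranked = sorted(zip(scores, docs), key=lambda p: p[0], reverse=True)
    return [doc for _, doc in ranked[:top_n]]
```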

Generative AI: Providing Tailored Solutions

The most transformative aspect of the Resolve Agent is its integration with a generative AI large language model (LLM). Instead of simply linking users to articles, the LLM generates precise, actionable responses to their queries, giving users direct answers. This is particularly valuable for resolving unique or poorly documented errors, and users no longer need to sift through multiple articles; the AI provides clear, concise solutions within seconds.
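
Grounding the model in the re-ranked articles keeps its answers anchored to real documentation. Here is a minimal sketch using Bedrock's public model ID for Claude 3.5 Sonnet; the prompt wording is illustrative, not Boomi's actual prompt.

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
CLAUDE = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # public Bedrock model ID

def generate_answer(error: str, articles: list[dict]) -> str:
    """Ask the LLM for a fix, grounded in the re-ranked articles."""
    context = "\n\n".join(a["content"] for a in articles)
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": (
                "Using only the documentation below, explain how to "
                f"resolve this Boomi process error.\n\nError:\n{error}"
                f"\n\nDocumentation:\n{context}"
            ),
        }],
    }
    resp = bedrock.invoke_model(modelId=CLAUDE, body=json.dumps(body))
    return json.loads(resp["body"].read())["content"][0]["text"]
```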

AWS Architecture: Powering the AI Solution

The Resolve Agent leverages Amazon Web Services (AWS) cloud services to implement its semantic search, re-ranking, and generative AI solution. The architecture comprises several key components:

  1. AWS Batch: This service is used to run the Amazon OpenSearch Indexing Batch job. It manages the periodic retrieval, processing, and embedding of help documentation and community articles, ensuring the OpenSearch indexes are always up to date.
  2. Amazon OpenSearch Serverless: Serving as the backbone of the search functionality, Amazon OpenSearch integrates semantic search capabilities, allowing the feature to retrieve relevant articles quickly and efficiently. OpenSearch’s scalability and reliability make it ideal for handling real-time queries across large datasets, ensuring users get prompt and accurate responses.
  3. Amazon SageMaker Endpoint: The re-ranker model is hosted on an Amazon SageMaker endpoint. This model refines the results retrieved by Amazon OpenSearch, ranking them based on their relevance to the error. By employing advanced machine learning algorithms, the re-ranker ensures that the most contextually appropriate articles are prioritized.
  4. Amazon Bedrock with Claude 3.5 Sonnet: Generative AI functionality is powered by Amazon Bedrock, which integrates the Claude 3.5 Sonnet LLM. This model provides direct, context-aware answers, enabling the feature to deliver actionable insights even for rare or complex errors. Amazon Bedrock also ensures seamless scalability and access to future LLM advancements, keeping the feature future-ready.

Benefits of the New Approach

The adoption of Retrieval-Augmented Generation (RAG) has transformed Boomi Resolve into an even more powerful and user-centric solution. Key benefits include:

  • Enhanced Accuracy: By integrating semantic search, a re-ranker, and generative AI, the feature ensures users receive the most relevant and actionable results.
  • Cost Efficiency: Transitioning from Elasticsearch to Amazon OpenSearch and other modern services reduces infrastructure costs while improving performance.
  • Improved Usability: The new feature provides both direct answers and access to related articles, streamlining the troubleshooting process and reducing user frustration.
  • Future-Proof Design: Built with adaptability in mind, the new architecture is ready to incorporate future advancements in AI and search technologies, ensuring its longevity and relevance.

 

Learn more about generative AI for integration and automation from Boomi on our Boomi AI webpages.
