Integrate Generative AI into existing products
Step-by-step guide for your first AI pilot
Introduction
Last week, we discussed how to use web-interface AI tools such as OpenAI ChatGPT, Google Bard, and Microsoft Bing Copilot to brainstorm new product ideas. No coding is required. By following the proper prompt guidelines, you can significantly boost your productivity. This week, we will cover how to leverage Generative AI to enhance existing products. We will also discuss how to incorporate your proprietary data through APIs and Foundation Models to develop an application.
Where to start
Always start with a problem. When I was a product owner in the payment industry, I constantly asked the following questions:
End Users/Consumers: Who are the targeted end users? How can we provide the right offers to meet their needs?
Customers: In the B2B world, you directly deal with customers like merchants, banks, etc. What concerns and pain points do they experience? Which areas require improvement?
Employees: What challenges do they face in getting things done? How can we improve efficiency and quality?
Obviously, not all of these questions can be answered by Generative AI. So where is a good place to integrate it? Start by asking the questions below:
Can you justify the value of the pilot project?
Do we have domain expertise to guide the training?
Do we have Machine Learning and Data Science resources?
Do we have the proper data to train a model that powers the solution?
Is this a problem that can't be solved with pre-programmed rules?
If the answers are “YES,” you are ready to start a pilot.
What problem to solve
Let me give an example of how I came up with a conceptual pilot project.
When I was the product owner at Visa for a BNPL (Buy-Now-Pay-Later) product that enables existing cardholders to make installment payments at any merchant, I knew from experience that there were many documentation-related issues. A complex product usually has extensive technical and supporting documents such as onboarding, implementation, and service guides, API specs, and End-to-End Test/acceptance criteria. Our customers, and even our internal users like Solution Architects or Customer Services, often face challenges in finding the right content whenever they need it.
If Generative AI can understand the user’s question, read the detailed documentation on their behalf, identify the relevant section, and summarize the answer in one place, like a virtual assistant, it would be a huge efficiency booster. Just think about how quickly users could find the right information by interacting with a chatbot instead of reading hundreds of pages of documents.
So, what are the main requirements specifically? With the aid of some AI prompts, I came up with three areas:
Retrieval and Cross-Referencing:
Quickly locate relevant information within large document repositories.
Identify and create links between related documents and sections.
Personalization:
Customize documentation based on user roles and technical expertise.
Continuous Improvement and Feedback:
Track document usage and identify areas of confusion or difficulty.
Gather insights from user queries, tickets, and feedback to address recurring issues.
Benefits of a Foundation Model (FM)
Training your own Natural Language Processing (NLP) model usually takes 6-12 months, which can be too long for a pilot project. The good news is that by leveraging Foundation Models (FMs), the timeline for a pilot project can be shortened to just 2-3 months.
What are FMs? FMs refer to large-scale machine learning models that have been trained on vast amounts of data and can perform a wide range of tasks. These models serve as the foundation for various AI applications. FMs can be used as a starting point for building more specialized AI models. Think of FMs as smart fresh graduates from a top university. They can quickly learn job-specific knowledge and context to perform the job properly.
To build the AI application, you will rely on APIs to connect to the FM and adapt it to your own documents and data. Techniques such as Retrieval-Augmented Generation (RAG) and fine-tuning can improve accuracy. For this conceptual pilot, RAG would be critical because it combines the strengths of two approaches: the FM, a Large Language Model, and Information Retrieval, which finds the documents relevant to a given question. The FM then incorporates the retrieved information and answers with facts and data.
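To make the RAG pattern concrete, here is a minimal sketch. The keyword-overlap retriever is a toy stand-in for the embedding-based search used in production, and all names (documents, functions) are illustrative; the final prompt would be sent to your FM provider's completions API.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, chunks: list[str], top_k: int = 1) -> list[str]:
    """Rank document chunks by word overlap with the question
    (toy stand-in for an embedding-based retriever)."""
    q = tokens(question)
    ranked = sorted(chunks, key=lambda c: len(q & tokens(c)), reverse=True)
    return ranked[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Ground the FM by placing retrieved passages before the question."""
    joined = "\n".join(context)
    return f"Answer using only the documentation below.\n\n{joined}\n\nQuestion: {question}"

# Hypothetical snippets from the BNPL product's documentation.
docs = [
    "Onboarding guide: merchants register through the partner portal.",
    "API spec: POST /installments creates a new installment plan.",
    "Service guide: disputes must be filed within 30 days of purchase.",
]
question = "How do I create an installment plan?"
prompt = build_prompt(question, retrieve(question, docs))
# `prompt` would then be sent to the FM's chat/completions endpoint.
```

The key design point is that the model answers from the retrieved passage rather than from its general training data, which reduces hallucination on proprietary content.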
How to start
Now that we’ve confirmed a conceptual pilot and decided on the FM approach, how do we start the project? An AI pilot project can be broken down into four steps:
Data Preparation and Model Training
Gather existing technical documents (implementation guides, service guides, API specifications).
Clean and standardize documents, eliminating inconsistencies and errors.
Extract relevant information using NLP techniques (entity recognition, sentence parsing, information retrieval).
Train the model on retrieval, cross-reference, and user-tailored content.
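The cleaning and chunking steps above can be sketched in a few lines. This is a simplified illustration: the footer pattern and sample text are assumptions, and real pipelines add richer extraction (entity recognition, layout-aware parsing).

```python
import re

def clean(raw: str) -> str:
    """Collapse whitespace and strip page-number footers, a common
    cleanup pass on text extracted from PDFs (pattern is assumed)."""
    raw = re.sub(r"Page \d+ of \d+", "", raw)
    return re.sub(r"\s+", " ", raw).strip()

def chunk(text: str, max_words: int = 40) -> list[str]:
    """Split a document into fixed-size word windows for retrieval."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

raw = "Implementation  Guide\n\nPage 3 of 120\nThe installment API requires..."
cleaned = clean(raw)
pieces = chunk(cleaned, max_words=5)
```

Chunk size matters in practice: windows that are too small lose context, while windows that are too large dilute retrieval relevance.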
Build/Improve the System
Develop a user-friendly interface and ensure accessibility for different user roles and technical expertise levels.
Integrate the AI models with existing content management systems and documentation workflows.
Benchmark against human responses, fine-tune based on evaluation results and feedback from domain experts.
Conduct rigorous testing with internal users and domain experts.
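Benchmarking against human responses can start with something as simple as token-overlap F1 between the model's answer and an expert-written reference. This is a rough stand-in for richer evaluation methods (human rating, LLM-as-judge); the sample answers are invented for illustration.

```python
def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a model answer and a human reference."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    if not pred or not ref:
        return 0.0
    remaining = list(ref)
    common = 0
    for tok in pred:
        if tok in remaining:       # count each reference token once
            common += 1
            remaining.remove(tok)
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

# Score a model answer against the domain expert's reference answer.
score = token_f1(
    "disputes must be filed within 30 days",
    "a dispute must be filed within 30 days of purchase",
)
```

Tracking this score across fine-tuning iterations gives a quick signal of whether changes are actually improving answer quality.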
Internal Evaluation
Introduce the system to internal teams within the organization for controlled initial use and feedback.
Gather user feedback and data on system performance and adoption.
Deploy and Monitor
Select a smaller group of external users or target audience for pilot testing in a live environment.
Monitor user engagement, document interactions, and feedback to validate the system’s real-world impact.
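Monitoring during the live pilot can be as lightweight as logging a thumbs-up/down per query topic and surfacing the weakest area. A minimal sketch, with topic names assumed for illustration:

```python
from collections import defaultdict

class FeedbackLog:
    """Record pilot-user feedback per topic so the weakest areas of the
    documentation assistant can be prioritized for improvement."""

    def __init__(self) -> None:
        self.votes: dict[str, list[bool]] = defaultdict(list)

    def record(self, topic: str, helpful: bool) -> None:
        self.votes[topic].append(helpful)

    def helpful_rate(self, topic: str) -> float:
        votes = self.votes[topic]
        return sum(votes) / len(votes) if votes else 0.0

    def weakest_topic(self) -> str:
        """Topic with the lowest helpful rate: the next area to improve."""
        return min(self.votes, key=self.helpful_rate)

log = FeedbackLog()
log.record("onboarding", True)
log.record("onboarding", True)
log.record("api-spec", False)
log.record("api-spec", True)
log.record("disputes", False)
```

A production system would add timestamps and the full query text, but even this aggregate view tells you where the documentation or retrieval index needs work first.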
I hope the steps above can help you think about a first AI project for your existing product. Whether it’s to improve efficiency, create a personalized user experience, or grow a new business, the opportunities are abundant.
Sherman Jiang is a product leader with a proven track record of success at Fortune 500 companies like Visa, HSBC, and Synchrony, and has honed expertise in Silicon Valley’s fast-paced tech scene. His passion lies in empowering payment and fintech companies through the power of Agile and AI augmentation. He specializes in leading team transformations, product strategy, product discovery, design, development, and go-to-market execution. He is also enthusiastic about how generative AI can make product managers better. You can reach him at:
Email: [email protected]