We’ve enjoyed the great privilege of working with clients across industries as they embrace innovation and modernize their businesses with technology. Our work is often focused on building or improving some aspect of the way these clients capture, organize, store, or analyze information that they believe will be important to how their organizations function. And hopefully, these efforts result in meaningful knowledge that is then used to make informed decisions, automate mundane and repetitive tasks, collaborate more efficiently, and meet the challenges of complex and dynamic landscapes.

Viewed through this lens, it becomes clear that helping our clients build and mature their Knowledge Management capabilities is often our goal. We’ve seen some amazing outcomes, but recognize that these results definitely aren’t achieved easily! Building effective knowledge management capabilities requires organizations to overcome both immediate and ongoing challenges. A few of the more pervasive challenges include:

  • Data volume and variety are at all-time highs: It’s a refrain we’ve all heard repeatedly over the last five years. But consider some hard numbers: the global volume of data is expected to reach ~181 zettabytes by 2025, up from 64 zettabytes in 2020, and 80-90% of this data will be unstructured: emails, documents, social media content, images, video, etc. Generative AI will fuel this surge as rapid adoption continues and people use the technology to produce new multimodal content (text, images, video) at a much higher rate. The varied formats and contexts of this unstructured data, combined with the sheer volume organizations must consider, make it difficult to ensure consistency and accuracy at scale.
  • Like it or not, we still live in data silos: Isolated pockets of data (documents, Excel files, code, etc.) within different systems with limited (or non-existent) communication between them often lead to inefficiencies and redundancies: varying formats of the same piece of information, misaligned access controls, and so on. Put more simply, most organizations still deal with isolated, poor-quality data. One obvious impact is that information that could serve as core organizational knowledge is never even considered.
  • Stuck in the “manual” holding pattern: The ongoing time and resource intensity of manual processes related to knowledge management can be a significant challenge. Managing and organizing information often requires human intervention to complete tasks like data entry, tagging, and classification. These tasks can be extremely time-consuming and divert people’s attention from engaging in more impactful activities. For example, a study published by workfellow.ai reports that office workers may spend as much as 50% of their time creating or updating documents and up to 10% of their time engaged in manual data entry.

Now for the good news. Many organizations have made headway in improving the quality of their core knowledge (information that has real business impact) by systematically addressing these challenges. They’ve invested in improving their processes to smooth the flow of information, empowered their people with the freedom to make impactful, logical decisions, and applied technology to assist along the way. The “enterprise search” movement that began in the 2000s is a prime example. When done well, enterprise search systems improve productivity, enable efficient retrieval of important information, and foster positive employee experiences and satisfaction.

Even more importantly, we believe we are at the beginning stages of a leap forward in Knowledge Management capabilities that will be fueled by generative AI. Effective use of generative AI has already become a differentiator for organizations that have moved forward more aggressively with adoption. In the most recent McKinsey Global Survey, “Gen AI Leaders” attribute ~10% of their earnings before interest and taxes (EBIT) to their use of generative AI. We expect adoption to continue and the impact on knowledge management to meet or exceed the impact felt in other areas of our professional and personal lives.

Common Generative AI Patterns

It’s exciting for us to see that AIS clients are already using generative AI to enhance their organizational knowledge and address the challenges listed above. To date, we’ve found that clients who have adopted generative AI are most often seeking assistance in wrangling document content or other unstructured data.

These organizations are now using the following patterns consistently:

  1. Document Summarization: Summarizing lengthy document content makes it easier for employees to grasp essential information (see the sketch following this list).
  2. Content Classification and Tagging: Automatic categorization and tagging of documents, reducing the manual burden required to add and continually update metadata.
  3. Knowledge Extraction: Identifying key information and insights from documents, such as names, dates, and other critical data points, and creating structured data from unstructured sources.
  4. User Interaction and Behavior: Analysis of users’ interactions with existing systems and websites to understand their behavior and preferences. Using this knowledge to prioritize feature development, shape user responses, and recommend relevant content.
  5. Language Translation and Localization: Translating documents into multiple languages, making this knowledge more broadly accessible.
  6. Chatbots and Virtual Assistants: Focused assistants that use an organization’s unstructured business data as bounding context. This helps employees access information more efficiently, answer questions about specific business functions, and perform tasks by conversing naturally. Our work has been focused primarily on text-based interactions to date, but we are seeing growing demand for other modes of interaction.
  7. Compliance and Risk Management: Shredding (decomposing) and analyzing documents and technical artifacts for compliance with regulations.
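To make the first of these patterns concrete, here’s a minimal document-summarization sketch in Python. It assumes the OpenAI Python SDK (v1 or later) with an API key in the environment; the model name, prompt wording, and file name are illustrative stand-ins, not a prescribed implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(document_text: str, max_words: int = 150) -> str:
    """Ask a chat model for a concise summary of a document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable chat model works
        messages=[
            {"role": "system",
             "content": "You summarize business documents accurately and concisely."},
            {"role": "user",
             "content": f"Summarize the following document in at most {max_words} words:\n\n{document_text}"},
        ],
    )
    return response.choices[0].message.content

# Hypothetical input file, for illustration only.
print(summarize(open("quarterly_report.txt").read()))
```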

From Learnings to Product Launches

Though we had been using earlier GPT model versions in our work through services like GitHub Copilot, the release of GPT-3.5 in late 2022 was an eye-opening event for us. The model was much more capable and accurate, which immediately made it more relevant to a broad range of knowledge management scenarios. This was the moment we began to more fully understand how impactful generative AI would be to our business.

Since then, we’ve had the opportunity to work with many of our clients to prove the value of generative AI for their knowledge management initiatives, and then scale these capabilities across their businesses. We also invested significant time and effort to build and launch a generative AI-based proposal development tool, pWin.ai.

Key Generative AI Concepts to Know

We’ve learned a lot during this time and feel fortunate to have had deep, hands-on experience applying generative AI technology. Reflecting on these experiences, we’ve begun compiling a list of important concepts that are valuable for knowledge workers to understand. The technology is evolving rapidly, as is our place in the ecosystem, so any list of important concepts will inevitably evolve.

1. Training and Inference

At the core of artificial intelligence are two essential processes: training and inference.

Training is the “learning” phase where an AI model is taught how to perform its task. Think of it like teaching a student using textbooks. The model is fed a large amount of data (text, images, etc.) and learns patterns, relationships, and structures from this data. The goal is to enable the model to understand and generate content that is similar to what it has seen during training.

Inference is the “using” phase where the trained model applies what it has learned to generate new content or make predictions. Continuing the student analogy, inference is like the student taking an exam or writing an essay based on what they have learned. The AI model takes new input (a prompt or question) and produces an output (generated text or image) based on its training. In simple terms, training is about learning from data, and inference is about using that learning to create new content or make decisions.

Why it’s important: Knowledge workers will spend most of their time and effort interacting with generative AI models in the inference phase. It’s important to understand that adopting generative AI for knowledge work will not usually require training large generative AI models. Instead, most knowledge workers will rely on inference calls to frontier models trained by others.
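To make the two phases tangible, here’s a toy sketch in pure Python, with no real AI model involved: “training” tallies which word tends to follow which in a tiny corpus, and “inference” samples from those tallies to generate new text. Frontier models learn vastly richer patterns at vastly larger scale, but the two phases play the same roles.

```python
import random
from collections import defaultdict

# --- Training: learn word-transition patterns from a (tiny) corpus ---
corpus = "the model learns patterns from data and the model uses patterns to generate"
transitions = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)  # record what tends to follow what

# --- Inference: apply the learned patterns to generate new text ---
def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        candidates = transitions.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample from the learned distribution
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the model uses patterns to generate"
```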

2. AI Orchestration

A prompt is an instruction given to a generative AI model to guide its output (inference). It can be a question, a statement, or text that tells the model what kind of response to generate. “Prompt orchestration” coordinates various prompts to the model to ensure they work together smoothly and in the correct sequence to produce a logical result.

For instance, let’s assume we want to build a presentation with generative AI. One prompt might instruct a model to create text, another might ask a model to generate images, and a third might combine them into a presentation. In this simple example, sequencing and selection of the appropriate model are important, and orchestration helps to keep things straight.

In addition to sequencing prompts to models, most generative AI tools that go beyond the most basic scenarios (ChatGPT, for example) will need to coordinate with other systems. Often, they will need to query data sources, call APIs, or search the web to gather context before responding to a user.

Why it’s important: Orchestration ensures that important tasks happen in the right order and work together well, much like a project manager ensures each team member’s work fits together to complete the project successfully. Even if it is executing primarily behind the scenes, orchestration is the important “glue” that allows knowledge workers to use generative AI to accomplish domain-specific, complex tasks.
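Here’s a minimal sketch of the presentation example in Python, assuming the OpenAI Python SDK; the complete helper, model choice, and prompts are illustrative. Notice that the orchestration itself is ordinary code that sequences the calls and feeds one result into the next prompt.

```python
from openai import OpenAI

client = OpenAI()

def complete(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Single inference call; the orchestration logic lives outside this helper."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: one prompt produces the outline.
outline = complete("Draft a 3-slide outline for a talk on knowledge management.")

# Step 2: a second prompt, fed the first result, produces speaker notes.
notes = complete(f"Write brief speaker notes for each slide in this outline:\n{outline}")

# Step 3: a third call combines them (image generation could be another branch here).
deck_text = complete(f"Merge this outline and these notes into slide-by-slide text:\n{outline}\n{notes}")
print(deck_text)
```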

3. Retrieval Augmented Generation (RAG)

RAG is the (overly) technical-sounding name for an orchestration technique that improves the accuracy of AI-generated content by combining information retrieval and content generation.

In Retrieval, orchestration pulls relevant information from a database, document library, or other knowledge store so the system has the most pertinent facts on hand. Think of this step as a student looking up facts in a library or doing internet research before writing an essay.

In Generation, orchestration then uses the retrieved information to augment the prompt it sends to the generative AI model, grounding the answer. Think of this step as that same student writing the essay based on the facts discovered in the research.

Why it’s important: Knowledge workers rely on accurate, domain-specific information to make key decisions. RAG assists by improving the accuracy and relevance of domain-specific generative AI content.
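Below is a deliberately tiny RAG sketch in Python, again assuming the OpenAI SDK. The retrieval step here is naive keyword overlap so the example stays self-contained; production systems typically retrieve with embeddings and vector search, which we cover later in this list.

```python
from openai import OpenAI

client = OpenAI()

# A toy knowledge store; in practice these would be chunks from your documents.
knowledge_store = [
    "Expense reports must be submitted within 30 days of travel.",
    "The VPN client is required for all remote access to internal systems.",
    "New hires receive laptops on their first day from the IT service desk.",
]

def retrieve(question: str) -> str:
    """Naive keyword-overlap retrieval; real systems use embeddings + vector search."""
    q_words = set(question.lower().split())
    return max(knowledge_store, key=lambda chunk: len(q_words & set(chunk.lower().split())))

def answer(question: str) -> str:
    context = retrieve(question)                   # Retrieval step
    response = client.chat.completions.create(     # Generation step
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}"}],
    )
    return response.choices[0].message.content

print(answer("When are expense reports due?"))
```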

4. Information Chunking

In the context of information management, “chunking” is the process of breaking down large pieces of information into more manageable pieces, or “chunks.” For example, if you’ve asked a generative AI model to summarize a long document, a chunking routine can assist by dividing the document into smaller sections for more efficient processing. There are many different approaches to chunking content, and the choice of approach depends on the nature of the content and intended use.

Why it’s important: Chunking improves generative AI’s ability to manage and generate content from extensive and complex sources of information. A good chunking strategy is also required for effective RAG implementation.
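As a sketch of one common approach, here’s fixed-size character chunking with overlap in plain Python. The sizes and the character-based split are illustrative choices; production pipelines often split on sentence, paragraph, or token boundaries instead.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping, roughly fixed-size chunks.

    Overlap preserves context that would otherwise be cut at chunk boundaries.
    """
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step back by `overlap` characters
    return chunks

# Hypothetical input file, for illustration only.
document = open("long_report.txt").read()
for i, chunk in enumerate(chunk_text(document)):
    print(f"chunk {i}: {len(chunk)} chars")
```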

5. Data Augmentation

Breaking content into smaller chunks for efficient processing is one component and a good start. But we can go further and improve the quality of our interactions with generative AI systems by enriching the underlying data itself. Data Augmentation helps with this: we use an LLM to understand the meaning and often-hidden structure of a piece of unstructured data, and then annotate that data for future use.

Why it’s important: This process unlocks hidden value in your data, and subsequent calls to that data (through RAG or other processes) can then use it more effectively.
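Here’s one way this can look in practice: a sketch that asks a model (via the OpenAI SDK’s JSON mode) to annotate a chunk of text with structured metadata. The metadata schema and example text are invented for illustration.

```python
import json
from openai import OpenAI

client = OpenAI()

def annotate(chunk: str) -> dict:
    """Ask a model to extract structured metadata from unstructured text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # request well-formed JSON
        messages=[{"role": "user",
                   "content": "Return JSON with keys 'title', 'topics' (list), and "
                              f"'entities' (list) describing this text:\n\n{chunk}"}],
    )
    return json.loads(response.choices[0].message.content)

# Illustrative example text.
chunk = "AIS partnered with Contoso in March to migrate their claims system to Azure."
metadata = annotate(chunk)
# Store the annotations alongside the original chunk for later retrieval.
record = {"text": chunk, **metadata}
```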

6. Embeddings

Embeddings are a way to represent words, phrases, images, or other data as numerical vectors. These vectors capture the meanings and relationships between the data in a form that generative AI models can understand and process.

Think of embeddings like a map where similar concepts are located close to each other. For example, in an embedding space, the words “king” and “queen” would be near each other because they are related in meaning, while “king” and “banana” would be far apart.

Embeddings themselves are also generated by specialized generative AI models. For more info, check out our 5-minute video overview that explains embeddings.

Why it’s important: Embeddings are fundamental to generative AI models; all prompts, supporting context, and instructions to a generative AI model are converted to embeddings before processing. In addition, we will often embed our stored context for efficient use by generative AI models.
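To see this in code, here’s a small sketch that uses an OpenAI embedding model (the model name is one current option, not the only one) to compare our “king,” “queen,” and “banana” example.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    """Get an embedding vector for a piece of text."""
    response = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(response.data[0].embedding)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Higher values mean the two vectors point in more similar directions."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

king, queen, banana = embed("king"), embed("queen"), embed("banana")
print(cosine_similarity(king, queen))   # relatively high: related concepts
print(cosine_similarity(king, banana))  # relatively low: unrelated concepts
```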

7. Vector Databases

Vector databases are a specialized type of database designed to optimize the storage and retrieval of embeddings. Think of these as smart filing systems that organize information based on meaning. When you search for something, the vector database can quickly find relevant items by comparing their embeddings, even if the exact words you used aren’t present. Vector database functionality is also beginning to show up in many large, cloud-scale database systems.

Why it’s important: The need to retrieve relevant information quickly for inference continues to grow rapidly. Vector databases meet this need with storage and retrieval optimized for embeddings, letting you search your information by meaning. They now play a crucial role in effective RAG implementations.
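To demystify what a vector database does, here’s a toy in-memory stand-in in Python with hand-made vectors. Real vector databases add approximate-nearest-neighbor indexing, persistence, and metadata filtering on top of this basic idea, and would be fed embeddings like those in the previous section.

```python
import numpy as np

class ToyVectorStore:
    """An in-memory stand-in for a vector database: store vectors, query by similarity."""

    def __init__(self):
        self.items: list[tuple[str, np.ndarray]] = []

    def add(self, text: str, vector: np.ndarray) -> None:
        self.items.append((text, vector / np.linalg.norm(vector)))  # normalize once

    def query(self, vector: np.ndarray, top_k: int = 2) -> list[str]:
        q = vector / np.linalg.norm(vector)
        scored = sorted(self.items, key=lambda item: -(item[1] @ q))  # cosine similarity
        return [text for text, _ in scored[:top_k]]

# Hand-made 3-dimensional vectors for illustration; real embeddings have hundreds
# or thousands of dimensions and come from an embedding model.
store = ToyVectorStore()
store.add("expense policy", np.array([0.9, 0.1, 0.0]))
store.add("vacation policy", np.array([0.8, 0.3, 0.1]))
store.add("server racks", np.array([0.0, 0.1, 0.9]))
print(store.query(np.array([0.85, 0.2, 0.05])))  # -> the two policy documents
```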

8. Multimodal Generative AI

Multimodal generative AI systems can understand and generate content across different data types, such as text, images, and sounds. These are the “multiple modes.” Think of it like a person who can not only read and write stories but also draw pictures and compose music.

In some cases, a generative AI system may also call specially trained models to fully achieve this capability. For example, an inference call may be made to a large language model like GPT-4 to interpret a text prompt. If the model understands the ask to be a request to generate an image, an inference call is then routed to a model suited for image generation such as DALL-E, which can then respond with an image.

The other thing to understand is that generative AI models themselves are now multimodal. GPT-4, Gemini, and other leading frontier models can accept text or image inputs, interpret meaning, and output text.

Why it’s important: The volume and variety of unstructured data organizations need to use to build their core knowledge bases is expanding rapidly. Multimodal generative AI offers the ability to programmatically understand the meaning and relevance of this unstructured information, which should ease the burden on knowledge workers to manually interpret the meaning. Additionally, the generation of new content that builds on existing organizational knowledge should no longer rely solely on knowledge workers to generate from scratch.
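Here’s a simplified routing sketch of the kind described above, assuming the OpenAI SDK; the intent-classification prompt and model choices are illustrative, and a production system would handle intent far more robustly.

```python
from openai import OpenAI

client = OpenAI()

def handle(prompt: str) -> str:
    """Route a request to a text or image model based on the interpreted intent."""
    intent = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Answer IMAGE or TEXT: does this ask for a picture?\n{prompt}"}],
    ).choices[0].message.content.strip().upper()

    if "IMAGE" in intent:
        # Route to an image-generation model.
        image = client.images.generate(model="dall-e-3", prompt=prompt)
        return image.data[0].url  # URL of the generated image
    # Otherwise, route to a text model.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

print(handle("Draw a diagram-style illustration of a knowledge graph"))
```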

Conclusion

This post explored core knowledge management challenges organizations face and highlighted how organizations have applied generative AI to address these issues. Techniques like document summarization, content classification, and knowledge extraction are common patterns organizations are using to build value. The post also covered essential concepts in generative AI that knowledge workers should become familiar with. These included training and inference, AI orchestration, and retrieval-augmented generation (RAG), highlighting their importance in enhancing knowledge work.