Predictions for AI in 2023


I've worked at an innovative venture capital firm, a massive technology consulting firm, and an exciting AI product startup. I've been able to explore many facets of AI development and utilization since its infancy (when everyone was still figuring out Jupyter Notebooks and Docker).

These are some of my short-term predictions for AI, and a bit of my reasoning behind each prediction.

Context Platforms are Coming

I believe LLMs like ChatGPT and Bard are the most sophisticated systems in existence, but I mean that in the classical sense of "sophistry": these systems are incredibly good at producing arguments that sound clever and feasible but often lack reasoning, logic, or facts.

LLMs face the major challenge of "hallucinations": model outputs that answer questions in ways that seem well-reasoned but are factually or logically incorrect. Google's Bard model famously gave a factually inaccurate answer in one of the company's own advertisements.

This isn't surprising, as OpenAI states explicitly regarding ChatGPT:

  1. The model was not trained on data past December 2021, so it is relatively stale.

  2. "ChatGPT may produce inaccurate information about people, places, or facts"

That said, it turns out that providing these LLMs with explicit context alongside your prompt overcomes these issues in the majority of cases. Stephen Wolfram, of WolframAlpha and Mathematica, wrote about how his platform could be used to provide factual context that corrects GPT.
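To make that concrete, here's a minimal sketch of "context stuffing" using the 2023-era OpenAI Python client (pre-1.0); the model name and document text are placeholders:

```python
# A minimal sketch of providing explicit context alongside a prompt,
# using the 2023-era OpenAI Python package (pre-1.0).
import openai

context = "<paste the article or document you found here>"
question = "Based only on the context above, what happened and when?"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # Ground the model in the facts we want it to use...
        {"role": "system", "content": f"Answer using only this context:\n{context}"},
        # ...then ask the actual question.
        {"role": "user", "content": question},
    ],
)
print(response["choices"][0]["message"]["content"])
```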

There are two ways I think about context with respect to LLMs:

  1. What relevant information do I need to answer my query?

  2. How can I phrase my query to get the response I want?

Relevant Information

We know that when we explicitly provide relevant information to an LLM, we get far more relevant responses. This is so simple I will demonstrate it here with the text from this article on SBF:

...

Now the question becomes "How do I get a relevant chunk of text?"

One way is to search for an article or page you think is relevant, feed it into ChatGPT as explicit context, and then ask your question.

This method works pretty well, and some very cool projects like ChatPDF are built on this idea of just explicitly asking for the context (in that case, a PDF document) that a person wants to interact with. Unfortunately, this doesn't scale across the thousands of documents and websites that are utilized within an enterprise.

Relevant Information... at Enterprise Scale!

"Well, let's just throw all of our documents in as context."

That works, to an extent. Unfortunately, GPT limits the size of the prompt, which prevents you from throwing more than a few documents in at any given time. Not to mention that cost scales with the size of the prompt, making this method increasingly expensive.
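A rough back-of-the-envelope sketch shows why; the price and context limit below are illustrative assumptions, not quoted figures:

```python
# Back-of-the-envelope cost of "throw everything in the prompt".
# Both constants are illustrative assumptions, not quoted prices.
PRICE_PER_1K_TOKENS = 0.002  # hypothetical $/1K tokens
CONTEXT_LIMIT = 4096         # a typical 2023-era context window, in tokens

def estimate(context_chars: int) -> None:
    tokens = context_chars // 4  # rough rule of thumb: ~4 characters/token
    cost = tokens / 1000 * PRICE_PER_1K_TOKENS
    fits = "fits" if tokens <= CONTEXT_LIMIT else "EXCEEDS the context window"
    print(f"{tokens:>8} tokens -> ${cost:.4f} per call ({fits})")

estimate(10_000)     # a few pages: cheap, and it fits
estimate(1_000_000)  # "all of our documents": pricey, and it doesn't fit anyway
```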

Fortunately, rapid information retrieval across massive document stores has been an active area of research and development for decades. The popularity of LLMs has renewed interest in developing and productizing these systems for use in concert with them.

Probably the most prevalent method right now is searching across text embeddings, which has the advantage of preserving a form of semantics (not just the words in a user's query, but their relationships and intent). The details of how it works are rather academic, but I think Cohere does a good job of explaining things. The important thing to understand is that this approach takes sentences (which are very hard to compare) and reduces them to numerical vectors (which are much easier to compare).
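Here's a minimal sketch of that idea; embed() is a stand-in for a real embedding model (OpenAI, Cohere, sentence-transformers, and so on), swapped for placeholder vectors so the snippet runs:

```python
# A minimal sketch of embedding search: sentences become vectors, and the
# chunk whose vector is closest to the query's vector wins.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model (OpenAI, Cohere,
    # sentence-transformers, ...). Placeholder vectors keep the sketch
    # runnable; swap in a real model for meaningful results.
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.standard_normal(384)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_chunk(query: str, chunks: list[str]) -> str:
    # Score every chunk against the query; return the most similar one,
    # ready to be stuffed into an LLM prompt as context.
    q = embed(query)
    return max(chunks, key=lambda c: cosine(q, embed(c)))
```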

OpenAI offers services like its built-in embeddings API to help with this, but I don't believe they are focusing their efforts there; they seem focused on growing the capabilities of their LLMs instead. New products, and even pivots of existing products, are emerging to generate, store, and search these vectors at enterprise scale, such as Weaviate and Pinecone.

Relevant Queries

Now that we know we can rapidly find and use information to answer our prompts, another challenge emerges: in some cases, the phrasing of the prompt itself matters even more. Niklas Heidloff gives great examples of this out in the wild.

What has resulted is "prompt engineering": the skill set of crafting the right LLM prompts to get intended outcomes. A market has emerged for some of the most successful prompts. Even the CEO of OpenAI himself has recognized that this is a learnable skill set that is increasingly valuable.

As with all skills that rely on data and information, certain components of the skill set will become formalized and productized. LangChain is a growing toolset built around prompt engineering that lets developers experiment with and template prompts, pass information between LLMs and other systems like Wolfram Alpha, and even "chain" prompts together to perform more complex tasks.
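As a flavor of what that looks like, here's a minimal sketch in the spirit of LangChain's prompt templates and chains (its APIs have shifted between versions, so treat this as illustrative rather than canonical):

```python
# A minimal sketch of prompt templating and chaining, in the spirit of
# early-2023 LangChain. APIs have shifted between versions.
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# A reusable template: the "engineering" lives in the wording, not the code.
prompt = PromptTemplate(
    input_variables=["document", "question"],
    template=(
        "You are a careful analyst. Using only the document below, "
        "answer the question.\n\nDocument:\n{document}\n\nQuestion: {question}"
    ),
)

chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
print(chain.run(document="<retrieved chunk goes here>", question="What changed?"))
```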

Tools that retrieve information and manage prompts can be combined, either in frameworks or in fully managed platforms. These "context platforms" are in their early stages: right now we are seeing the deep technical building blocks that individuals assemble applications around, rather than first-class, vertically integrated products. But I believe that by 2024 we will see the first platform that marries these concepts together and sells customers a whole solution.

LLMs will be Interchangeable

OpenAI has recognized and openly spoken about all the limitations I've pointed out above. While they continue to invest in some interfaces to their LLMs (such as the ChatGPT website) and some helper tools, I believe their core investment will be in improving and serving their models as APIs to broad consumers. This is due both to product-business alignment and to the growing swath of paying customers who use these APIs to build competing interfaces. Focusing on LLM performance will be increasingly important as the battle for AI supremacy heats up between some of the biggest players.

Where we will see some of the biggest startups, innovations, and creative applications is in the clever combination of LLMs and the aforementioned context platforms to tackle problem sets stratified by industry. As such, I think these LLMs will become interchangeable, either through middleware written by the companies and platform owners involved or through an agreed-upon API standard between providers.

Much as with cloud providers, operations will be relatively portable between LLM providers. Companies will be able to switch LLMs when one demonstrates higher performance, or possibly better data privacy and compliance (a FedRAMPed LLM, anyone?).
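To sketch what that middleware might look like (the interface below is hypothetical, not an existing standard):

```python
# A sketch of the middleware that would make LLM providers interchangeable:
# one interface, provider-specific adapters behind it. The names here are
# hypothetical, not an existing standard.
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class HostedAPIProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        return "<call a hosted API such as OpenAI's here>"

class AirGappedProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        return "<call a self-hosted, compliance-friendly model here>"

def answer(provider: LLMProvider, prompt: str) -> str:
    # Application code never names a vendor, so swapping providers (for
    # performance, price, or compliance) is a one-line change.
    return provider.complete(prompt)
```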

I welcome discussion here, as this is probably the most controversial of my takes.

LLMs Are Going Behind the Firewall

[Image: Jack Ryan working with an AI robot, made in Playground AI]

For those curious, my prompt in Playground AI for the above image was "CIA Agent Jack Ryan working with a Cyberpunk Artificial Intelligence Robot".

When GPT models were first being served by OpenAI, curious minds began deducing the potential costs of the computing required to train and run them. Projections ran into the millions of dollars for initial training, and hundreds of thousands of dollars per month to host and maintain. Pocket change for the likes of Google and Microsoft, but unapproachable for your average startup.

Then something incredible happened: Facebook/Meta's own LLM, LLaMA, was leaked. LLaMA is significantly smaller, and with fine-tuning many are finding that it can approach ChatGPT's capabilities on specific tasks.

While there are still questions about licensing, it's fair to say the cat is out of the proverbial bag. Initial skepticism about LLMs running in edge and air-gapped environments is being thoroughly squashed, with new improvements in performance and optimization for self-hosted LLMs popping up every week (they can run on a Pixel 5). One amazing project to follow here is the Vicuna project, which is researching how to make these models more performant and portable. Databricks is also doing incredible work here with their Dolly project.
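As a flavor of how approachable self-hosting has become, here's a minimal sketch using the llama-cpp-python bindings (the model path is a placeholder for whatever weights you have locally):

```python
# A minimal sketch of a self-hosted model via the llama-cpp-python bindings.
# The model path is a placeholder for whatever weights you have locally.
from llama_cpp import Llama

llm = Llama(model_path="./models/vicuna-7b.gguf", n_ctx=2048)

out = llm(
    "Q: Why do air-gapped deployments matter for sensitive data?\nA:",
    max_tokens=128,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```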

I too was initially skeptical about how capabilities like ChatGPT could be brought to my customers in the Defense and Intelligence Communities of the United States. Now, I'm more confident than ever that LLM providers will begin to feel the pressure of delivering these enhancements so organizations can maintain their air-gapped data privacy while still advancing AI capabilities.

What About 2025?

I won't even try to predict what AI could look like in 2025. We are seeing exponential advancement in this space and new tools/paradigms could emerge that push that growth even more.

As organizations and individuals adapt to these changes, it will be crucial to understand the implications of AI development and how to harness the power of LLMs effectively. By staying informed and prepared, businesses can leverage the advancements in AI to create powerful solutions and reshape industries. The future of AI is bright, and those who embrace these changes and adapt to the new reality will undoubtedly thrive in this dynamic ecosystem.
