Combine the Power of AI with Business Context Using SAP HANA Cloud Vector Engine

The new SAP HANA Cloud vector engine enables businesses to combine the power of large language models (LLMs) with company-specific, real-time data and business process know-how, all integrated in one multi-model database: SAP HANA Cloud. With the latest quarterly release, the vector engine is now generally available.

SAP HANA Cloud is a market-leading database-as-a-service enabling intelligent data applications and is one of the most adopted services within SAP Business Technology Platform (SAP BTP) internally at SAP. As of today, more than 180 different applications and services use SAP HANA Cloud with its multi-model capabilities.

Now, SAP HANA Cloud is also a leader in the generative AI age.

At SAP, we work with various LLMs such as GPT-4, Llama2, Falcon-40b, and Claude2. While these models offer amazing opportunities, they also have limitations. For example, LLMs may rely on outdated training data and lack company-specific data and business process context.

As an example, imagine having an LLM as a colleague. This colleague would be very intelligent, able to program, pass exams, or have arguments – but this colleague would not know anything about what happened in the world in the past year, nor have any idea about internal processes of your company or any of your systems. Even worse, after every conversation you have, this colleague would forget what you just talked about. Working with such a lack of memory would be of limited value. This shortcoming is why an LLM cannot answer easy questions like β€œWhat do you think about the offer from our most important supplier last week?” An LLM can only work with the initial training data – all other data must be provided as context.

Supplying this missing information is where the SAP HANA Cloud vector engine can assist. The engine can provide LLMs with an organization's relevant data through a process called “retrieval-augmented generation.”
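
To make the retrieval step of this pattern concrete, here is a minimal, self-contained sketch. The embed() helper, the toy corpus, and its vectors are purely illustrative placeholders; in a real application the embeddings would come from an embedding model and the documents and their vectors would be stored in and retrieved from SAP HANA Cloud.

```python
# Minimal sketch of the retrieval step behind retrieval-augmented generation.
# embed() is a hypothetical placeholder for a real embedding model, and the
# tiny corpus stands in for documents whose vectors would live in SAP HANA Cloud.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a deterministic toy "embedding" so the example runs on its own.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=8)
    return v / np.linalg.norm(v)

corpus = [
    "Supplier offer received last week: 5% discount on Q3 volumes.",
    "Service notes: customer reported a delayed delivery in March.",
    "Contract clause 4.2 covers penalties for late shipments.",
]
corpus_vectors = np.stack([embed(doc) for doc in corpus])

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the top_k corpus snippets most similar to the question."""
    q = embed(question)
    scores = corpus_vectors @ q  # cosine similarity, since all vectors are normalized
    best = np.argsort(scores)[::-1][:top_k]
    return [corpus[i] for i in best]

question = "What do you think about the offer from our most important supplier last week?"
for snippet in retrieve(question):
    print(snippet)  # these snippets would be handed to the LLM as context
```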

A Game-Changing Feature

So how does the vector engine work? It is a new addition to SAP HANA Cloud's multi-model engines, enabling customers to utilize the similarity between two or more vectors to solve business problems. With the integration of AI-focused technology, SAP HANA Cloud can now empower businesses to combine intuition with data-driven insights to solve even the most complex problems.
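
As a rough illustration of what this looks like at the SQL level, the hedged sketch below stores an embedding in a vector column that sits next to ordinary business columns in the same table. It assumes the REAL_VECTOR data type and TO_REAL_VECTOR function introduced with the vector engine and uses the hdbcli Python driver; the host, credentials, table layout, and embedding values are placeholders, not a definitive implementation.

```python
# Hedged sketch: storing embeddings next to business data in SAP HANA Cloud.
# Assumes the vector engine's REAL_VECTOR type and TO_REAL_VECTOR function;
# connection details, table layout, and embedding values are placeholders.
from hdbcli import dbapi  # SAP HANA Python client (pip install hdbcli)

conn = dbapi.connect(
    address="<your-hana-cloud-host>",  # placeholder
    port=443,
    user="<user>",
    password="<password>",
)
cursor = conn.cursor()

# A document table that keeps relational columns and the embedding together.
cursor.execute("""
    CREATE COLUMN TABLE SUPPLIER_DOCS (
        DOC_ID    INTEGER PRIMARY KEY,
        SUPPLIER  NVARCHAR(100),
        DOC_TEXT  NVARCHAR(5000),
        EMBEDDING REAL_VECTOR(3)  -- toy dimension; real models use e.g. 768 or 1536
    )
""")

# Insert a row; TO_REAL_VECTOR converts a JSON-style array string into a vector.
cursor.execute(
    "INSERT INTO SUPPLIER_DOCS VALUES (?, ?, ?, TO_REAL_VECTOR(?))",
    (1, "ACME", "Offer: 5% discount on Q3 volumes.", "[0.12, -0.40, 0.88]"),
)
conn.commit()
```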

Some key benefits and features of the vector engine include:

  • Multi-model: Users can unify all types of data into a single database to build innovative applications using an efficient data architecture and in-memory performance. By adding vector storage and processing to the same database already storing relational, graph, spatial, and even JSON data, application developers can create next-generation solutions that interact more naturally with the user.
  • Enhanced search and analysis: Businesses can now apply semantic and similarity search to business processes using documents like contracts, design specifications, and even service call notes (a SQL sketch of such a search follows this list).
  • Personalized recommendations: Users can benefit from an improved overall experience with more accurate and personalized suggestions.
  • Optimized large language models: The output of LLMs is augmented with more effective and contextual data.

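Building on the enhanced search point above, the hedged sketch below runs a similarity search over the SUPPLIER_DOCS table from the previous sketch, reusing its hdbcli connection. It assumes the vector engine's COSINE_SIMILARITY function; the query embedding is a placeholder that would normally be produced by the same embedding model used when the documents were stored.

```python
# Hedged sketch: semantic/similarity search over the SUPPLIER_DOCS table from
# the previous example, reusing its hdbcli cursor. Assumes the vector engine's
# COSINE_SIMILARITY function; the query embedding below is a placeholder.
query_embedding = "[0.10, -0.35, 0.90]"  # placeholder vector for the user's question

cursor.execute(
    """
    SELECT TOP 3 DOC_ID, SUPPLIER, DOC_TEXT,
           COSINE_SIMILARITY(EMBEDDING, TO_REAL_VECTOR(?)) AS SCORE
    FROM SUPPLIER_DOCS
    ORDER BY SCORE DESC
    """,
    (query_embedding,),
)
for doc_id, supplier, doc_text, score in cursor.fetchall():
    print(f"{score:.3f}  {supplier}: {doc_text}")
```
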
The Database Foundation of SAP's Generative AI Strategy

The addition of the vector engine establishes SAP HANA Cloud as the default database in SAP's generative AI solution strategy. Customers can create the next level of user experiences along with other services within SAP BTP. For example, SAP BTP can provide centralized access to SaaS-based LLMs from multiple vendors as well as host LLMs from open-source models or third parties. The generative AI hub in SAP AI Core, a capability that facilitates the use of generative AI, will soon rely on SAP HANA Cloud as its primary vector storage. One function of the generative AI hub is to help provide a process for creating embeddings and storing the resulting vectors in SAP HANA Cloud. Customers building intelligent data applications can use both services together to augment LLM queries with relevant context for meaningful answers.
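
As a final hedged sketch of that combination, the snippet below feeds the rows returned by the similarity search above into an LLM prompt. It reuses the cursor from the earlier sketches; call_llm() is a hypothetical stand-in for whichever model the application reaches, for example through the generative AI hub, whose actual API is not shown here.

```python
# Hedged sketch: augmenting an LLM prompt with rows returned by the similarity
# search above. Reuses the hdbcli cursor from the earlier sketches; call_llm()
# is a hypothetical placeholder, not a real generative AI hub API.
def call_llm(prompt: str) -> str:
    return f"[LLM answer based on {len(prompt)} prompt characters]"  # placeholder

question = "What do you think about the offer from our most important supplier last week?"
question_embedding = "[0.10, -0.35, 0.90]"  # placeholder embedding of the question

cursor.execute(
    """
    SELECT TOP 3 DOC_TEXT,
           COSINE_SIMILARITY(EMBEDDING, TO_REAL_VECTOR(?)) AS SCORE
    FROM SUPPLIER_DOCS
    ORDER BY SCORE DESC
    """,
    (question_embedding,),
)
context = "\n".join(row[0] for row in cursor.fetchall())
answer = call_llm(f"Context:\n{context}\n\nQuestion: {question}")
print(answer)
```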

SAP is working on foundation models that are specific to SAP-related industry and process knowledge.

The Database for Innovation

SAP HANA Cloud continues to lead the market by storing and processing different types of relevant business data – all within the same database. The new vector engine, combined with other multi-model capabilities, opens a world of possibilities for applications to help enhance the execution of business processes. Whether improving search capabilities, gaining deeper insights for informed decisions, or optimizing LLMs, SAP HANA Cloud enables the type of applications that can elevate the expertise and effectiveness of every user.

To learn more, sign up for . Do you already have a use case for SAP HANA Cloud vector engine in mind? If so, consider registering for the .


Juergen Mueller is CTO and member of the Executive Board of SAP SE, Technology & Innovation.
Stefan Baeuerle is head of Database, SAP HANA Database, & Analytics for Technology & Innovation at SAP.
