Walter Sun, Author at SAP News Center

AI in 2026: Five Defining Themes
/2026/01/ai-in-2026-five-defining-themes/ | Fri, 09 Jan 2026

AI is quickly evolving from a set of powerful tools to a central component of the competitive enterprise. Specialized models, AI agents, and AI-native architecture will ensure that AI continues to embed itself into the very core of enterprise operations, with potentially powerful benefits.

To navigate AI's evolution, organizations need to understand that it's no longer just a question of "What can AI do?" but "How do we set our organization up for success with AI? How do we build for it? What problems do we solve with which models? How do we govern it?"

Looking ahead, five critical themes will define enterprise AI in 2026, presenting both opportunities and challenges for organizations. Let's dive in.


1. New categories of AI foundation models unlock enterprise value

Advances in generative AI stem from breakthroughs in "foundation models": massive neural networks trained on vast amounts of data that can be adapted to a wide range of tasks.

Large language models (LLMs) were the first wave of foundation models at scale. General-purpose LLMs, trained on the equivalent of all the text on the internet, opened the door to many value-adding use cases, including summarizing documents, writing code, and powering applications like ChatGPT and Claude. Over the last few years, we have already seen the foundation model approach applied to other domains, such as video creation and voice.

In 2026, specialized foundation models optimized for specific data types and domains will power high-value enterprise AI use cases. Video generation models have already shown that models grounded in real-world physics data can reason about scenes and physical dynamics. Emerging world models demonstrate that simulating the physical world unlocks new possibilities in simulation, synthetic training data, and digital twins. Vision-language-action models demonstrate that robot-specific foundation models can generalize to new tasks and environments, enabling the transformation of web-scale knowledge into real-world actions in logistics and manufacturing.

In the enterprise domain, a similar shift is underway for structured data found in databases and transactional business software. While LLMs are impressive across many enterprise use cases, they cannot handle tasks like numerical prediction, such as inferring a delivery date or a supplier risk score. However, work on relational foundation models shows that training on structured datasets (for example, data in tables rather than generic text or images from the internet) can deliver high predictive accuracy without the tedious feature engineering and training required in classical machine learning. This means organizations can deploy predictive models in days, not months. Recent launches of relational foundation models, such as SAP-RPT-1, Kumo, and DistilLabs, highlight how new models can directly support use cases like forecasting, anomaly detection, and optimization across ERP, finance, manufacturing, and supply chain scenarios.
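To make the contrast concrete, here is a minimal, hypothetical sketch of how a relational prediction task is framed: point at a table of linked business records, name the target column, and let the model fill in the unknowns. The `Table` structure and the naive group-mean `predict_missing` stand-in are invented for illustration; they are not the SAP-RPT-1, Kumo, or DistilLabs API.

```python
from dataclasses import dataclass

@dataclass
class Table:
    name: str
    rows: list  # list of row dicts, as pulled from a transactional system
    key: str

# Toy relational task: predict the delivery delay (in days) of an open order
# from the order history, with no hand-built features.
orders = Table("orders", [
    {"id": 1, "supplier": "S1", "delay": 2},
    {"id": 2, "supplier": "S1", "delay": 4},
    {"id": 3, "supplier": "S2", "delay": 0},
    {"id": 4, "supplier": "S1", "delay": None},  # open order: target unknown
], "id")

def predict_missing(table: Table, target: str, group_by: str) -> dict:
    """Fill unknown targets from the per-group mean -- a naive stand-in for
    what a relational foundation model would infer from the full schema."""
    history = {}
    for row in table.rows:
        if row[target] is not None:
            history.setdefault(row[group_by], []).append(row[target])
    return {
        row["id"]: sum(history[row[group_by]]) / len(history[row[group_by]])
        for row in table.rows
        if row[target] is None
    }

predictions = predict_missing(orders, target="delay", group_by="supplier")
```

The point of the framing is that the task definition (table, target, linkage) replaces weeks of feature engineering; a real relational foundation model would exploit the whole schema rather than a single group mean.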

In 2026, these specialized models are expected to scale to deliver superior performance and economics for structured business tasks, surpassing general-purpose LLMs and state-of-the-art machine learning algorithms. These models will emerge as the workhorses behind high-value enterprise tasks.

2. Software evolves toward AI-native architecture

AI has seen various approaches create value over the decades, from the first rules-based expert systems to probabilistic deep learning and the recent explosion in generative AI. In 2026, organizations will shift from enhancing existing AI applications and processes to AI-native architectures, which will fully realize the promise of modern AI.

AI-native architecture adds a continuously learning, agentic intelligence layer on top of deterministic systems, enabling applications to become intent-driven, context-aware, and self-improving rather than being statically coded around fixed workflows. Agentic systems will still only be as good as the context layer they can reliably retrieve and ground on. Here, organizations should invest in truly comprehensive, semantically rich knowledge graphs that provide a scalable source of context, making AI-native software dependable and self-improving.
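As a toy illustration of that context layer, the snippet below grounds an agent's answer in a small knowledge graph rather than in the model's parameters alone. The graph contents and the `ground` lookup are invented for illustration, not a product API:

```python
# A tiny knowledge graph: (entity, relation) -> related entities.
# Contents are hypothetical examples of semantically rich business context.
graph = {
    ("OrderProcessing", "uses"): ["InventoryService", "PaymentService"],
    ("InventoryService", "owned_by"): ["LogisticsTeam"],
}

def ground(entity: str, relation: str) -> list:
    """Retrieve facts an agent can cite before acting."""
    return graph.get((entity, relation), [])

# An agent asked "who owns a dependency of order processing?" walks the graph:
# OrderProcessing -uses-> InventoryService -owned_by-> LogisticsTeam.
deps = ground("OrderProcessing", "uses")
owners = [o for d in deps for o in ground(d, "owned_by")]
```

Because every step is an explicit graph traversal, the agent's conclusion is auditable: the retrieved facts, not the model's free-form generation, carry the answer.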

Enterprise applications will increasingly be built natively around AI capabilities, featuring user experiences designed for multimodal, natural language interaction; AI agents reasoning through complex processes; and a foundation layer managing foundation models, services, and a knowledge graph that captures semantically rich business data. AI-native architecture also enables more employees to create apps, such as smaller, ad-hoc productivity applications, in a matter of minutes without straining IT.

AI-native architecture builds on, and even requires, established SaaS principles and investments in modern cloud applications. The technical term for combining probabilistic, adaptive AI models with deterministic systems of record is neurosymbolic AI. It brings together AI's best capabilities to adapt with reliable, governable, and deterministic processes. Next-gen applications will not just have AI bolted on; they'll be built around AI at their core. This means combining reasoning, business rules, and data to deliver insights and automation seamlessly. Imagine ERP systems that proactively flag anomalies, recommend actions, and even execute workflows autonomously, all while staying aligned with company policies and regulations.

3. Agentic governance becomes mission-critical

Over the past two to three years, generative AI has introduced a wave of value-added use cases. These use cases were largely based on users sending a prompt to a model, receiving a response, and then interacting with the model again.

Last year saw the start of the next wave of innovation: AI agents capable of planning and iteratively reasoning through multi-step tasks, including selecting tools, self-reflecting on progress, and collaborating with other AI agents. These advanced AI agents promise to tackle complex business processes that were previously immune to automation, such as analyzing myriad documents, records, and policies.

However, the proliferation of AI agents, many of which handle critical tasks and sensitive data, demands the development of new capabilities. Agentic governance will emerge as a critical capability as organizations deploy hundreds of specialized AI agents. The “agent sprawl” challenge will mirror previous shadow IT crises, but with higher stakes given agents’ autonomous decision-making capabilities.

Forward-thinking enterprises will establish comprehensive governance frameworks addressing five dimensions: agent lifecycle management (version control, testing protocols, deployment approval, retirement procedures); observability and auditability (agent inventory, logging, reasoning paths, and action traces); policy enforcement (embedding business rules, regulatory constraints, and ethical guidelines into agent execution); human-agent collaboration models (defining autonomy boundaries, approval requirements, and escalation pathways); and performance monitoring (tracking accuracy, efficiency, cost, and business impact).
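One way to picture these dimensions is as a machine-checkable policy record that gates every agent action before execution. The sketch below is a simplified, hypothetical encoding; the field names, actions, and thresholds are illustrative, not any SAP product API:

```python
from dataclasses import dataclass

@dataclass
class AgentGovernancePolicy:
    agent_id: str
    version: str                   # lifecycle management
    approved_for_deployment: bool  # deployment approval gate
    log_reasoning_traces: bool     # observability and auditability
    allowed_actions: set           # policy enforcement
    autonomy_level: str            # human-agent collaboration boundary
    max_cost_per_run_usd: float    # performance and cost monitoring

    def authorize(self, action: str, estimated_cost: float) -> bool:
        """Gate a proposed agent action against the policy."""
        if not self.approved_for_deployment:
            return False
        if action not in self.allowed_actions:
            return False
        return estimated_cost <= self.max_cost_per_run_usd

# Hypothetical policy for an invoice-matching agent.
policy = AgentGovernancePolicy(
    agent_id="invoice-matcher",
    version="1.2.0",
    approved_for_deployment=True,
    log_reasoning_traces=True,
    allowed_actions={"read_invoice", "propose_match"},
    autonomy_level="approval_required",
    max_cost_per_run_usd=0.50,
)
```

Keeping the policy explicit and versioned is what makes "agent sprawl" governable: every deployed agent carries a record that tooling can inventory, audit, and enforce.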

The organizational shift will prove profound: from viewing AI as an independent tool to managing agents as digital coworkers requiring onboarding, performance reviews, and continuous improvement. HR and IT functions will collaborate on "digital workforce management" as organizations treat agentic governance as seriously as they do traditional workforce oversight.

4. Intent-driven ERP and generative UI emerge as a new user experience

Consumers are becoming increasingly familiar with computer interactions requiring prompts in natural language, voice, and even images and gestures. At the same time, generative AI's ability to create text, graphs, code, and HTML on the fly is improving rapidly. In parallel, AI agents enable users to simply express their intentions, allowing the agent to determine how to work toward achieving that goal.

These advancements open the door to varied and entirely new modalities for users to work with enterprise software, as well as "no-app ERP" experiences. For example, to book a customer visit, a worker typically needs to open an analytics application to review the account, look in the CRM system to retrieve the customer's address, and then navigate to another application to book travel, among other tasks.

In 2026, we will see "gen UI" experiences increasingly surface via digital assistants, relieving users from the need to navigate between multiple applications and perform manual tasks. With time, AI will allow the user to simply express the intent: "Prepare a trip to my customer with the most leads." From here, an AI agent will plan out the steps and required systems, interacting with the user to confirm travel details while dynamically generating analytical graphs and briefing material in the window. As AI agents develop stronger calculation and prediction tools, users will be able to "speak to their data" more naturally, with agents making data-based decisions in the background. To be clear, interactions with agents will extend far beyond a chat box; organizations will enjoy rich visualizations, complete workflows, and the ability to build hyper-personalized apps with just a few commands.

The user interface will not disappear. No-app ERP experiences and autonomous agents require the same foundational substrate that humans rely on for their daily work: structured workflows, security, governance, and business logic defined in business applications. The difference is that agents consume these primitives programmatically at scale, not only through a GUI, and humans can interact with these agents via natural language without ever needing to open the application.

These capabilities will usher in a new paradigm for human-AI collaboration and productivity in the workplace. Personalized experiences and adaptive workflows across applications and data sources will lower adoption barriers. This ability to focus solely on achieving a user's intention, regardless of the interaction modality and underlying systems, will drive return on investment (ROI) in AI and enterprise software.

5. Deglobalization drives sovereign AI offerings

AI has sparked debates about digital sovereignty among nations due to its potential impact on everything from scientific discovery and national security to economic productivity and even culture. Events in geopolitics, such as supply chain disruptions caused by tariffs and war, have only intensified the urgency that many nations and organizations feel to become digitally sovereign.

Digital sovereignty has two broad definitions. First, digital sovereignty is an information security designation governing data storage and access, such as U.S. FedRAMP and German VSA, required to process sensitive governmental data in a "sovereign cloud." Second, and more broadly, sovereignty refers to the provenance of physical assets, intellectual property, legal jurisdiction, and services along the cloud stack. For example, does an application utilize an AI model created in Europe, the U.S., or China, and is the data center geographically isolated?

The high stakes, geopolitical uncertainty, and complexity of "sovereign AI" will lead enterprises to increasingly demand AI and cloud solutions that are simultaneously cutting-edge, flexible, and fully sovereign. This intensifies the shift from a globalized, one-size-fits-all cloud to regionally compliant, AI-powered enterprise platforms. At the same time, governments will continue to refine their national AI strategies to invest in areas along the stack where they can compete and create value.

Executing on the 2026 AI themes

In 2026, AI is poised to move from a supporting tool to a fundamental pillar of the enterprise. This shift is driven by a convergence of defining trends, including increasingly capable agents, generative UI, and AI-native architecture, that push AI from the application layer into the very core of business operations.

Organizations that thrive will be those that recognize this shift and build an enterprise that is purpose-built for AI: establishing robust governance to manage a new, collaborative workforce of humans and AI agents; embracing gen UI to lower adoption barriers and an intent-driven user experience that helps employees interact naturally; seeking out specialized foundation models that are precisely tuned for enterprise use cases to drive business value; and, finally, building applications natively around AI that combine reasoning, business rules, and data, delivering proactive insights and automation.

However, in 2026, organizations will still need high-quality, connected data. Data silos severely limit the effectiveness of AI. As mentioned, AI-native architecture requires established investments in modern cloud applications that harmonize data across the entire business, because unified data makes AI's outcomes more accurate and relevant.


Jonathan von Rueden is chief AI officer at SAP SE.
Walter Sun is senior vice president and global head of AI for SAP Business AI at SAP.
Sean Kask is vice president and head of AI Strategy for SAP Business AI at SAP.

Get news and stories delivered straight to your inbox each week via the SAP News Center newsletter
From Assistive AI to Agentic AI: Risks, Responsibilities, and the Road Ahead
/2025/06/assistive-agentic-ai-risks-responsibilities-road-ahead/ | Wed, 04 Jun 2025

The AI landscape is evolving at breakneck speed. Previously, AI systems were primarily assistive and reactive, offering recommendations or performing predefined tasks when asked. Now they are entering the era of agentic AI: systems that operate autonomously, adapt in real time, and collaborate like digital colleagues.


But as AI becomes more independent, new risks emerge. So, how can we navigate this next frontier responsibly? This is a question that we at SAP do not leave to chance.

From tools to teammates

Imagine you’re buying a car. You expect it to meet all safety standards, regardless of where the component parts are built or how the car is assembled. The process behind the scenes does not change your expectation of safety. The same goes for agentic AI.

Agentic AI systems are more than tools; they are intelligent agents that plan, learn from experience, self-correct, and collaborate. They’re capable of orchestrating complex processes, making decisions, and even engaging with other agents or humans to achieve a goal. However, with this leap forward comes a new layer of complexity and risk.

Core capabilities and risks of agentic AI

Agentic AI systems bring powerful capabilities like planning, reflection, and collaboration, enabling them to tackle complex tasks autonomously. They can map strategies, learn from mistakes, use external tools, and coordinate with humans and other agents.

However, each strength introduces risks. For example, flawed planning can cause inefficiencies, reflection may reinforce unethical behavior, tool usage can lead to instability when systems interact unpredictably, and unclear collaboration can result in miscommunication and compounded errors. Balancing these capabilities with proper safeguards is essential for safe, ethical deployment.

Managing autonomy: balancing freedom with control

One of the most pressing challenges with agentic AI is managing its autonomy. Left unchecked, these systems can veer off course, misinterpret context, or introduce subtle risks without immediate detection. To address this, organizations must strike a careful balance between freedom and control.

We have learned that oversight should be calibrated according to risk. High-stakes domains like healthcare or human resources demand robust human supervision, while low-risk, routine tasks can tolerate greater autonomy. Also, continuous monitoring is essential; agentic AI systems, like any complex technology, require regular checks to ensure quality, compliance, and reliability.

A key element of this oversight is maintaining a "human in the loop" approach, where human judgment is integrated into critical decision points, ensuring that automated actions remain aligned with human values and organizational intent.

This principle has been at the heart of SAP's ethical AI approach from the beginning, reflecting our belief that AI should augment, not replace, human decision-making. To reinforce this, SAP has introduced mandatory ethics reviews for all agentic AI use cases, ensuring that each deployment is scrutinized for ethical implications and remains aligned with our responsible AI principles.
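A minimal sketch of risk-calibrated oversight might look like the routing below, where low-risk tasks run autonomously and high-stakes tasks require human sign-off. The task names, tiers, and `route` function are hypothetical illustrations, not SAP's implementation:

```python
# Hypothetical risk tiers per task type; unknown tasks default to caution.
RISK_TIERS = {
    "summarize_meeting_notes": "low",
    "screen_job_applications": "high",    # HR: robust human supervision
    "adjust_treatment_schedule": "high",  # healthcare: human in the loop
}

def route(task: str, human_approver=None) -> str:
    """Execute low-risk tasks autonomously; gate high-risk tasks on a human."""
    tier = RISK_TIERS.get(task, "high")
    if tier == "low":
        return "executed_autonomously"
    if human_approver is not None and human_approver(task):
        return "executed_with_approval"
    return "escalated_to_human"
```

The design choice worth noting is the default: a task that is not explicitly classified is treated as high-risk, so new agent capabilities start under supervision rather than outside it.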

Video: What is Responsible AI at SAP?

Building transparency and accountability

Transparency is not just a buzzword; it's a foundational requirement for building trust in agentic AI. From the outset, during the design phase, it is crucial to classify AI systems based on the complexity and risk of the tasks they perform. This classification guides decisions about the necessary safeguards and ensures that mechanisms for human intervention are integrated from the beginning.

At runtime, transparency is maintained through explainability and traceability. Developers and end-users must be able to understand what the system is doing and why. Crucially, accountability must always rest with humans or legal entities, never with the AI itself.

Rethinking governance and regulation

Despite the emergence of agentic AI, there have been no new regulations specifically crafted for it. Existing laws and frameworks such as GDPR still apply and provide a solid foundation for governance. However, what has changed is the level of technical rigor required to remain compliant and ethically sound. Organizations must now adopt more robust processes. They need to analyze use cases with greater precision, apply risk-based controls that match the potential impact of the AI system, and ensure that ethical and legal standards are upheld through enhanced design practices and ongoing testing.

Designing with human values at the center

Agentic AI cannot be an excuse for lowered standards. At SAP, the stance is unequivocal: Even in autonomous systems, AI must meet the highest ethical benchmarks. This means embedding principles such as fairness, transparency, and human agency directly into the design.

Ultimately, all users should be equipped with the tools and understanding they need to supervise and, when necessary, intervene in the system鈥檚 behavior.

Building trust in a black-box world

Trust in AI doesn't happen by default; it must be intentionally built and continually reinforced. One of the most effective ways to do this is by giving stakeholders the right amount of information. Too much detail can be overwhelming and counterproductive, while too little fosters blind trust or fear of the unknown. The key lies in communicating clearly about the system's capabilities, risks, limitations, and appropriate use. Empowering users to critically assess the AI's behavior, and to know when to step in, is central to creating a safe, secure, and trusted AI environment.

Rethinking KPIs in the AI-augmented workplace

As agentic systems, like our Joule Agents, begin handling more tasks, human roles will naturally evolve. To keep up with this shift, organizations need to rethink how they define and measure success. This starts with investing in change management and upskilling programs that prepare employees to work effectively alongside AI. It also requires redefining productivity metrics, moving beyond task completion to focus on how well humans and AI agents collaborate. Success should be measured by how efficiently teams harness AI to unlock new levels of insight and innovation.

Building AI that builds trust

Agentic AI is not just another phase; it is a transformation. But like any transformative technology, success depends on how it's built, governed, and used.

At its best, agentic AI amplifies human capabilities, accelerates innovation, and helps tackle challenges once considered too complex. But it also demands a new level of diligence, oversight, and ethical reflection.

The future is not just about building smarter agents; it's about building responsible ones.

Walter Sun is senior vice president and head of AI at SAP.

SAP and Cohere Partner to Deliver Trusted, Scalable Generative AI for the Enterprise
/2025/05/sap-cohere-partner-trusted-scalable-generative-enterprise-ai/ | Tue, 20 May 2025

Generative AI is reshaping the enterprise: transforming how work gets done, how decisions are made, and how value is created. But as businesses move beyond experimentation, the stakes increase. Enterprise adoption requires more than powerful models; it demands trust, scale, and real-world applicability.


That is why SAP is excited to announce our expanded partnership with Cohere, a leader in secure, enterprise-grade AI.

Together, we plan to bring Cohere's powerful generative and advanced retrieval models to the SAP ecosystem to enrich our product suite, playing an important role in powering agentic AI experiences.

These models are planned to be available alongside other leading AI models from SAP as well as third parties in the generative AI hub in SAP AI Core, with the intent to give customers more choice to build AI-powered solutions that meet their unique business needs.

Expanding SAP's trusted AI model portfolio

Our approach to business AI is rooted in trust. Our customers expect and deserve AI that respects their data privacy, fits within their operational workflows, and understands the context and complexity of their industries. Cohere's focus on security, efficiency, and enterprise applicability aligns perfectly with SAP's approach to business AI and our generative AI hub.

Cohere Command models are lightweight, high-performing language models tailored for complex business tasks, with support for agentic workflows and multilingual operations. The Embed and Rerank models enable powerful enterprise search and retrieval capabilities, helping customers build accurate, context-aware RAG pipelines across structured and unstructured data.
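The retrieve-then-rerank shape of such a RAG pipeline can be sketched as follows. The word-overlap scoring is a deliberately crude stand-in for embedding similarity and cross-encoder reranking; no Cohere API is called, and the documents are invented examples:

```python
def overlap(a: str, b: str) -> float:
    """Jaccard word overlap -- a toy proxy for semantic similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def retrieve(query: str, docs: list, k: int = 3) -> list:
    """First stage: cheap, recall-oriented retrieval (stands in for Embed)."""
    return sorted(docs, key=lambda d: overlap(query, d), reverse=True)[:k]

def rerank(query: str, candidates: list, k: int = 1) -> list:
    """Second stage: precise reordering of candidates (stands in for Rerank)."""
    return sorted(candidates, key=lambda d: overlap(query, d), reverse=True)[:k]

docs = [
    "invoice dispute escalation policy for enterprise customers",
    "employee travel booking guidelines",
    "customer invoice payment terms and dispute process",
]
top = rerank("customer invoice dispute", retrieve("customer invoice dispute", docs))
```

In a production pipeline the two stages use different models on purpose: a fast embedding search narrows millions of documents to a handful, and a slower, more accurate reranker orders that handful before the generator sees it.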

Cohere models are designed to perform in production environments while respecting enterprise privacy requirements and compute constraints. And because Cohere shares our commitment to privacy-first design, these capabilities are built to serve even the most regulated industries, such as finance, healthcare, and the public sector.

SAP: Launch partner for Cohere's reasoning model

As part of the partnership, SAP plans to be one of the first partners to offer Cohere's upcoming reasoning model, a purpose-built, high-efficiency model designed to power agentic use cases.

We see enormous potential here. SAP's vision for collaborative AI agents, capable of automating complex, multi-step tasks across systems, requires not just scale, but reasoning. Whether it's helping consultants configure a system or enabling customer service to resolve cross-system issues, this next generation of AI requires models that can reason, plan, and act securely. Cohere's reasoning model is built for exactly that.

"We're excited to partner with SAP and bring its enterprise customers the latest security-first models and solutions from Cohere. We're especially excited that SAP will be one of the first partners to offer our upcoming reasoning model. SAP and Cohere share a vision for practical AI innovation, and our collaboration marks an exciting milestone as we unlock new efficiencies and growth for global enterprises."

Martin Kon, President and COO, Cohere

Powering real-world applications across industries

With this collaboration, SAP customers will be able to use Cohere models to solve pressing business challenges across industries, such as:

  • Agentic task automation: Enable AI assistants that can take actions across enterprise tools and systems
  • Multilingual RAG applications: Retrieve, rank, and summarize data from global policy manuals, compliance documents, or internal knowledge bases
  • Secure document analysis: Understand long, structured, multimodal files like financial disclosures, M&A reports, technical manuals, or medical imaging
  • Context-aware enterprise search: Improve search accuracy across unstructured content like emails, tables, or contracts

Customers will be able to easily access, test, and scale these models in production within SAP's generative AI hub.

Expanding choice without compromise

With our partnership with Cohere, we are continuing to expand a growing ecosystem of AI capabilities that are open, secure, and business-ready. This partnership helps ensure that customers can choose the right model for their use case, while trusting that it meets SAP's standards for quality, reliability, and compliance.

Together, SAP and Cohere are enabling enterprises to harness generative AI with confidence, whether they're building knowledge assistants, automating processes, or delivering new intelligent services to users.



Walter Sun is senior vice president and head of AI at SAP.

How SAP and Google Cloud Are Advancing Enterprise AI Through Open Agent Collaboration, Model Choice, and Multimodal Intelligence
/2025/04/sap-google-cloud-enterprise-ai-open-agent-collaboration-model-choice-multimodal-intelligence/ | Wed, 09 Apr 2025

AI is increasingly embedded everywhere in business operations, powering automation, insight, and decision-making across systems and workflows. As part of our ongoing partnership with Google Cloud, SAP is enabling the next wave of enterprise AI by contributing to the new Agent2Agent (A2A) interoperability protocol, which establishes a foundation for AI agents to securely interact and collaborate across platforms.


This work is complemented by two additional areas of progress: first, the expansion of Google Gemini models in SAP's generative AI hub on SAP Business Technology Platform (SAP BTP); second, the use of Google's video and speech intelligence capabilities to support multimodal retrieval-augmented generation (RAG) for video-based learning and knowledge discovery in SAP products.

Together, these efforts reflect a shared commitment to deliver enterprise-ready AI that is open, flexible, and deeply grounded in business context.

Bringing AI agents together: laying the groundwork for interoperability

The future of work is agentic. Businesses are increasingly deploying AI agents that assist with real tasks: resolving customer issues, managing approvals, and collaborating across business functions. This is why SAP is delivering a collaborative agent architecture with Joule to support cross-functional agentic workflows across SAP Business Suite.

But for these agents to deliver real value, they cannot operate within a single vendor landscape. They must be able to collaborate across various platforms, securely exchange information, and coordinate actions across complex enterprise workflows. This need for seamless interaction underscores why the A2A protocol represents a significant step beyond simple API integrations or enhanced tooling.

That's why SAP has joined Google Cloud and other enterprise leaders as a founding contributor to the new A2A protocol. This open standard is designed to ensure agents from different vendors can interact, share context, and work together, enabling seamless automation across traditionally disconnected systems.

Consider a customer dispute resolution scenario: a representative receives a billing inquiry via Gmail. Instead of toggling between tools, they can invoke Joule directly from the email. Joule, acting as an agent orchestrator, initiates a dispute resolution process, engaging another Google agent that connects to Google BigQuery, where relevant transactional warehouse data resides. Together, the agents validate the issue, retrieve insights, and recommend a resolution, without manual system switching, data reconciliation, or context loss.

This is the kind of cross-platform collaboration the A2A protocol is designed to enable: AI agents working together to accelerate business outcomes, reduce friction, and enable people to focus on more strategic work. It also reinforces SAP's vision for Joule as an agent orchestrator working across enterprise workflows: interoperable, proactive, and deeply connected to business context.
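The dispute-resolution handoff described above can be sketched schematically as follows. The message shape and agent functions are simplifications invented for illustration; this is not the actual A2A protocol schema, and the warehouse lookup is a stub:

```python
def bigquery_agent(task: dict) -> dict:
    """Stand-in for a remote Google agent that queries warehouse data."""
    # A real agent would run a query against BigQuery; we return canned records.
    records = [{"invoice": "INV-100", "amount": 250.0, "duplicate": True}]
    return {"records_found": bool(records), "records": records}

def joule_orchestrator(inquiry: str, remote_agents: dict) -> dict:
    """Receive a billing inquiry, delegate the lookup to whichever remote
    agent advertises the needed skill, and assemble a recommendation."""
    task = {"skill": "billing_data_lookup", "input": inquiry}
    result = remote_agents[task["skill"]](task)  # cross-vendor handoff
    return {
        "status": "resolved" if result["records_found"] else "needs_review",
        "evidence": result["records"],
    }

outcome = joule_orchestrator(
    "Customer disputes a duplicate charge on INV-100",
    {"billing_data_lookup": bigquery_agent},
)
```

The key idea an interoperability protocol standardizes is exactly this seam: the orchestrator addresses a skill, not a vendor-specific API, so the remote agent behind it can be swapped without changing the workflow.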

Expanding access to Google models in generative AI hub

Beyond agent interoperability, SAP is furthering its commitment to openness and flexibility by expanding access to Google models in the generative AI hub, a key capability of the AI Foundation on SAP BTP.

Through the generative AI hub, customers gain enterprise-grade access to a curated portfolio of leading foundation models. That portfolio now includes Google Gemini 2.0 Flash and Flash-lite, which join the existing support for Gemini 1.5 models already available through the hub.

This expanded model choice gives customers the flexibility to build and extend AI-driven solutions using high-performance, low-latency models optimized for enterprise workloads, while staying within SAP's secure, business context-rich environment.

By combining Google's model innovation with SAP's deep understanding of enterprise processes, we enable customers to apply generative AI in ways that are not only powerful, but also practical, trustworthy, and fully aligned with how businesses operate.

Unlocking multimodal understanding with Google Video Intelligence

As part of our continued collaboration with Google Cloud, SAP is also advancing multimodal RAG, a highly requested capability among SAP customers, especially for video-based learning content.

Multimodal RAG enhances information retrieval and generation by integrating multiple data modalities (text, images, audio, and video) into a single, structured process. This approach enriches knowledge sourcing and elevates how users interact with training and support materials.

To address the complexity of extracting meaningful insights from video content, SAP leverages Google Video Intelligence for on-screen text detection across video frames, and Google's Speech-to-Text API for accurate transcription of spoken audio. During the indexing process, these outputs are stored with corresponding timestamps, creating a structured foundation for retrieving relevant video segments with precision.

By grounding audio and visual content with time-aligned metadata, SAP enables users to search and retrieve specific, contextually relevant moments within a video, making the learning experience more intuitive, accessible, and impactful.
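A simplified sketch of such a time-aligned index: transcript and on-screen-text segments are stored with timestamps so a query can return the exact moment in a video. The data and substring matching are illustrative stand-ins for real Speech-to-Text and Video Intelligence outputs:

```python
# Each entry records where the text came from (speech transcript or on-screen
# OCR) and when it appears. Contents are invented examples.
index = [
    {"video": "intro.mp4", "start_s": 0,   "source": "speech", "text": "welcome to the course"},
    {"video": "intro.mp4", "start_s": 95,  "source": "ocr",    "text": "configuring approval workflows"},
    {"video": "intro.mp4", "start_s": 210, "source": "speech", "text": "summary and next steps"},
]

def find_segments(query: str, index: list) -> list:
    """Return (video, start timestamp) for segments mentioning the query."""
    q = query.lower()
    return [(seg["video"], seg["start_s"]) for seg in index if q in seg["text"]]
```

Because every segment carries its own timestamp, a learning assistant can deep-link the user to second 95 of the video rather than merely recommending the whole file.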

"As agentic AI evolves, seamless handling of multi-modal data (text, voice, enterprise videos, and images) becomes paramount," said Miku Jha, director of AI/ML and Generative AI at Google Cloud. "This introduces significant challenges for agent interoperability. An open protocol like A2A is therefore indispensable, providing the necessary framework and flexibility for agents to effectively communicate and collaborate across these diverse modalities. Multi-modality is not simply a capability; it is a foundational requirement driving the next generation of interconnected agentic systems."

This is another example of how 麻豆原创 is integrating Google's AI capabilities into business-relevant scenarios, helping customers unlock more value from their unstructured content and elevate the way knowledge is delivered across the enterprise.

Shared vision for business AI

These efforts reflect a broader strategic alignment between 麻豆原创 and Google Cloud: a shared belief in AI that is open, composable, and grounded in real business context. Whether it's shaping emerging standards for agent collaboration, providing choice through best-in-class models, or making unstructured content actionable, we are focused on helping our customers innovate with confidence, today and into the future.

To learn more about how 麻豆原创 and Google Cloud are shaping the future of enterprise AI, visit and explore to see these innovations in action.


Walter Sun is senior vice president and head of AI at 麻豆原创.

Subscribe to the 麻豆原创 News Center newsletter and get stories and highlights delivered straight to your inbox each week
]]>
AI That Thinks, Learns, and Acts: How 麻豆原创 and NVIDIA Are Shaping the Future of Business AI /2025/03/sap-and-nvidia-shaping-future-of-business-ai/ Tue, 18 Mar 2025 20:00:00 +0000 /?p=232669 In today's business landscape, enterprises need AI that reasons through challenges, anticipates outcomes, and takes action. That's why 麻豆原创 and NVIDIA are deepening our work together to deliver more advanced AI capabilities into the hands of businesses worldwide.

With NVIDIA’s latest announcement of, 麻豆原创 is strengthening its agentic AI strategy to help drive even greater business impact. These models allocate more compute time before responding to improve accuracy and enhance AI agents with advanced聽 decision-making and execution capabilities. By integrating them, Joule agents will become more adept at tackling complex business challenges鈥攍everaging deeper contextual reasoning, making more precise decisions, and seamlessly interacting with enterprise data and systems to deliver more intelligent and autonomous operations.

Transforming business today: What 麻豆原创 and NVIDIA have already achieved

The long-standing partnership between 麻豆原创 and NVIDIA has already delivered AI innovations that are transforming business operations. Across industries, these AI innovations are revolutionizing how enterprises implement 麻豆原创 solutions and develop applications on 麻豆原创 Business Technology Platform (麻豆原创 BTP).

AI agents that work together, and work for you

For example, consultants working on digital transformation projects, such as those driven through RISE with 麻豆原创, are now leveraging 麻豆原创 Joule for Consultants, enhanced with microservices, to quickly surface relevant insights from 麻豆原创-exclusive content. Consultants can ask questions in natural language to instantly retrieve precise guidance from past implementations and get assistance in interpreting ABAP code. This helps reduce the time spent navigating documentation and troubleshooting complex application logic, accelerating solution design, minimizing delays, and improving overall project efficiency.

For developers, AI-driven code generation has become a catalyst for innovation. Joule for developers, powered by microservices, part of software platform, helps generate ABAP code faster and more efficiently by enabling faster AI inferences for code generation tasks. 麻豆原创 and NVIDIA collaborated on end-to-end model development for Joule for developers, including data processing and training the models. LLMs trained on ABAP and 麻豆原创's enterprise logic enable developers to write, explain, and optimize 麻豆原创-specific code efficiently. As a result, enterprises can reduce development time, improve code quality, and accelerate innovation, modernizing their 麻豆原创 environments more effectively.

Beyond implementation and development, AI is reshaping how businesses visualize and interact with their products. Leveraging technologies, 麻豆原创 Intelligent Product Recommendation can bring real-time 3D visualizations to complex products and offerings. Businesses can now simulate entire supply chains before making capital investments, optimize factory layouts with physics-based modeling, and enhance customer engagement through realistic, interactive product experiences that improve decision-making.

These AI-driven advancements are already reshaping operations in manufacturing, retail, and asset-intensive industries, enabling users to make more informed decisions, unlocking greater efficiency, and enhancing customer satisfaction.

What鈥檚 new: Advancing AI reasoning agents to transform work

"麻豆原创's AI agents, orchestrated by Joule, reason through challenges, solve complex problems, and drive efficiency," said Kari Briski, VP, Generative AI Software for Enterprise, NVIDIA. "麻豆原创 plans to integrate NVIDIA Llama Nemotron reasoning models to enhance AI-driven automation and enable businesses to optimize operations."

With the integration of NVIDIA Llama Nemotron reasoning models, 麻豆原创 will continue to advance the reasoning of Joule agents, redefining enterprise automation and allowing enterprises to scale automation with confidence and drive efficiency across end-to-end business processes.

The future of business AI

Together, 麻豆原创 and NVIDIA are redefining how enterprises use and benefit from AI, bringing automation, intelligence, and efficiency to every facet of business operations. From advanced AI agents that execute complex cross-functional workflows to AI-driven development and immersive digital experiences, the possibilities are expanding rapidly.

about the 麻豆原创 and NVIDIA collaboration today. Explore what AI can do for your .


Walter Sun is SVP and global head of AI at 麻豆原创.

Receive weekly news highlights from the 麻豆原创 News Center
]]>
AI in 2025: Five Defining Themes /2025/01/ai-in-2025-defining-themes/ Thu, 16 Jan 2025 11:15:00 +0000 /?p=230523 Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.

But what exactly lies ahead? We'd like to share five key themes for AI in 2025 that undoubtedly come with challenges for businesses but also the potential to redefine what's possible. Ready to glimpse into next year and beyond? Let's dive in.

Tap 麻豆原创 to achieve real-world results and attain your full potential with embedded AI capabilities that leverage your data responsibly

1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems

AI agents are currently in their infancy. While many software vendors are releasing and labeling the first "AI agents" based on simple conversational document search, that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily "under the hood," driving complex agentic workflows.

Users will interact with a copilot for their tasks, which will take in the request and coordinate a system of multiple expert AI agents to complete more difficult tasks. Future AI agents, or , can collaborate to understand the business user, have all the context, and structure the problem to subsequently interact with these domain-specific expert AI agents, each performing specific sub-tasks that together complete a much more complex task. In the future, users will not even need to trigger an action. Instead, AI agents will proactively respond to business events such as incoming customer inquiries, supply chain disruptions, or demand surges. They will automatically prepare a decision workflow as far as they can before pinging the human user for feedback.
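
A toy sketch of such an orchestration loop: the agent names and the deliberately naive keyword router below are invented for illustration; in a real system an LLM-based planner would decompose the request and an orchestrating copilot would merge the results:

```python
def finance_agent(task: str) -> str:
    # Hypothetical expert agent: would query budget data in a real system.
    return f"finance agent handled: {task}"

def supply_chain_agent(task: str) -> str:
    # Hypothetical expert agent: would re-plan shipments in a real system.
    return f"supply chain agent handled: {task}"

# The orchestrator's routing table: keyword -> expert agent.
AGENTS = {"budget": finance_agent, "delivery": supply_chain_agent}

def orchestrate(request: str) -> list[str]:
    """Split a request into sub-tasks and route each one to the expert
    agent whose keyword it mentions. A production planner would reason
    about dependencies between sub-tasks instead of simple splitting."""
    results = []
    for subtask in (s.strip() for s in request.split(" and ")):
        for keyword, agent in AGENTS.items():
            if keyword in subtask:
                results.append(agent(subtask))
    return results

out = orchestrate("check budget impact and reschedule the delivery")
```

Each sub-task lands with a different specialist, and the orchestrator collects the partial results into one answer for the user.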

If we look at a five-year horizon, AI agents will simplify significant portions of workflows, even aspects that have been resistant to automation, such as exceptions in customer service, long-tail administrative tasks, and specific programming activities like coding or debugging software. AI agents will be flexible: they can plan, fail, try something else, or self-correct based on reasoning. AI agents will handle and complete routine, repetitive tasks end-to-end as effectively as humans, and often more effectively, leading to increased productivity and demonstrable cost savings. Agents will be more adaptable and robust than conventional robotic process automation (RPA) for long-tail and highly variable tasks. This means figuring out the best result out of many possible outcomes, which is almost impossible to hardcode in an RPA algorithm with classical automation methods.

Adopting AI in these domains will also shift workforce dynamics, with human roles evolving to focus on anticipating uncommon scenarios, coping with ambiguity, factoring in human behavior, making strategic decisions, and driving genuine innovation — complemented, not replaced, by AI capabilities. 

In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.

2. Models: No Context, No Value

Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This constraint will only tighten, and companies must learn to adapt their models to unique, content-rich data sources. Model improvements in the future won't come from brute force and more data; they will come from better data quality, more context, and the refinement of underlying techniques. Companies must spend more time innovating to make better models through fine-tuning and model adaptation rather than just training larger and larger models. Neurosymbolic AI techniques, especially knowledge graphs, will see a renaissance since they can provide both learning objectives for foundation models and context to significantly improve the performance of generative AI while reducing hallucinations.

We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate outcomes based on predictions grounded in physical reality. PINNs are set to gain importance because they will enable autonomous robots to navigate and execute tasks in the real world, from warehouses to manufacturing plants. Other examples are models trained on tabular, structured data, like 麻豆原创 Foundation Model, which can handle tasks that LLMs cannot do well, such as predicting numeric values.
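
As a rough illustration of the principle behind PINNs, the snippet below computes the physics residual of the simple decay equation dy/dt = -k*y along a sampled trajectory, using a finite difference as a stand-in for the automatic differentiation a real PINN would use. The equation choice and all names are illustrative; a real PINN minimizes such a residual jointly with a data-fitting loss:

```python
import math

def physics_residual_loss(y, dt, k=1.0):
    """Mean squared residual of dy/dt + k*y = 0 on a sampled trajectory.
    A PINN adds a term like this to its training loss so that predictions
    stay consistent with the governing physical equation."""
    residuals = []
    for i in range(len(y) - 1):
        dydt = (y[i + 1] - y[i]) / dt   # finite-difference derivative
        residuals.append(dydt + k * y[i])
    return sum(r * r for r in residuals) / len(residuals)

dt = 0.01
physical = [math.exp(-i * dt) for i in range(100)]  # true solution y = e^{-kt}
unphysical = [1.0] * 100                            # constant guess violating the ODE

low = physics_residual_loss(physical, dt)     # near zero: obeys the physics
high = physics_residual_loss(unphysical, dt)  # large: penalized by the physics term
```

Trajectories that violate the equation are penalized even where no training data exists, which is exactly what keeps a physics-informed model grounded in reality.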

Models will increasingly become more multimodal, meaning an AI system can process information from various input types. AI applications will eventually evolve into "any-to-any" modality solutions capable of understanding, processing, and reasoning across text, voice, image, video, and sensor data within a single model. In addition, smaller and more specialized LLMs with scalable fine-tuning techniques and the ability to run on any device will become more common, a trend that may lead to hyper-personalized models for organizations or even individuals in the future.

Enterprises will shift toward strategies utilizing multiple foundation models (not to be confused with multimodal capabilities in a single model, described above), leveraging a diverse set of AI models and techniques tailored to specific use cases. This is backed by the trend of fine-tuning small slices of models, which requires fewer resources and much less data, resulting in full model flexibility and enabling businesses to extract more value from their unique data and gain a competitive edge. Enterprise software vendors will offer or extend integrated AI model marketplaces and platforms that support seamless model deployment, management, and updating. Benchmarking and lowering model switching costs will help deploy the same use cases in heterogeneous environments.

Context equals value. Knowledge graph technology has been around for 40 years and is now seeing a revival because it can overcome key LLM challenges, such as understanding complex formats, hierarchy, and relationships between business data. Knowledge graphs offer data meaning and explain the relationships between entities, significantly supercharging the abilities of LLMs. The next step in this journey will be large graph models, allowing further advancement in generative AI.
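
To illustrate how a knowledge graph can supply context to an LLM, here is a minimal sketch in which facts about the entities in a question are retrieved from a toy triple store and prepended to the prompt. The triples, entity names, and prompt format are invented for illustration; the point is that the model answers from stated relationships rather than guessing:

```python
# Toy knowledge graph as (subject, predicate, object) triples.
TRIPLES = [
    ("order_4711", "placed_by", "ACME Corp"),
    ("order_4711", "contains", "pump_x9"),
    ("pump_x9", "supplied_by", "Atlas Supplies"),
]

def facts_about(entity):
    """All triples mentioning the entity, rendered as plain sentences."""
    return [f"{s} {p} {o}" for s, p, o in TRIPLES if entity in (s, o)]

def grounded_prompt(question, entities):
    """Prepend explicit graph facts (deduplicated, order-preserving) so the
    model can answer from known relationships, reducing hallucinations."""
    facts = dict.fromkeys(f for e in entities for f in facts_about(e))
    return "Facts:\n" + "\n".join(facts) + f"\n\nQuestion: {question}"

prompt = grounded_prompt("Who supplies the pump in order 4711?",
                         ["order_4711", "pump_x9"])
```

The multi-hop chain (order contains pump, pump supplied by vendor) is exactly the kind of relationship an LLM struggles to recover from raw text but a graph encodes explicitly.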

Implicit knowledge is power, and making knowledge explicit to others is a superpower.

3. Adoption: From Buzz to Business

While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry's unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working through issues like AI-specific legal and data privacy terms for the first time (much as when companies started moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.

From a technological perspective, while 2024 saw significant advancements in AI, 2025 will see companies focus on making these advancements more meaningful through seamless data integration, ultimately enhancing the accuracy and significance of AI-powered outcomes and boosting adoption. Lastly, in 2025, we might glimpse a shift in the software business model from building static software features and functions to an outcome-as-a-service model focused on achieving process objectives.

4. User Experience: AI Is Becoming the New UI

AI's next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.

This means disrupting the classical user experience from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people. AI won't be limited to one app; it might even replace them one day. With AI, frontend, backend, browser, and apps are blurring. This is like giving your AI "arms, legs, and eyes." While power users will still have singular, expert interfaces, most users will demand flexibility across multiple access patterns. At the same time, there will be a growing acceptance of longer inference times for high-quality answers to complex, previously unsolvable problems and actions in domains requiring deep analysis and research. Ultimately, users will recognize the trade-off between latency and complexity of tasks handled by AI.

Importantly, we will see organizations move beyond viewing AI as a collection of productivity tools and begin reimagining their workforce as a network of collaborative intelligence with AI agents and humans working to accelerate innovation within the enterprise. For example, combining human expertise in strategic thinking with AI's strengths in large-scale analysis and pattern recognition will create new competitive advantages for companies that effectively orchestrate these hybrid intelligence networks to drive breakthrough discoveries and market opportunities. Next year will also mark the early stages of a significant shift in how humans and AI work together, with agents evolving into workflow partners, taking initial steps toward independently navigating software environments and automating routine tasks, from data analysis and report generation to schedule coordination and software testing. This will also start a longer journey toward transformed work processes and patterns, with forward-thinking organizations developing new roles, metrics, and training approaches for effective human-AI task collaboration.

5. Regulation: Innovate, Then Regulate

It's fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation. The regulatory landscape will become even more fragmented, with the tracking hundreds of AI regulations under discussion worldwide. This requires evaluating model compliance with and technical interpretation of various regulatory frameworks.

In 2025, the discussion will shift from what we try to regulate from a technical standpoint to how we innovate and what we deem fundamentally human. This discussion will elevate the role of humans, contribute a much more positive perspective, and help shape a long-term vision for how we want humanity and AI to live and work together. 

In this environment, it will continue to be critical for companies developing and deploying AI technology to adhere to responsible principles around safety, security, and ethical use. This will also help set the stage for important precedents and compliance.

Executing on the Themes in 2025

Indeed, these are just a few of what we are sure will be many exciting advancements for AI in 2025. Overall, the biggest takeaway from the year ahead will be making existing breakthrough technology more meaningful. We will see AI much deeper and almost invisibly embedded in consumer and enterprise applications and witness more advancements in how vendors and organizations that use these applications embed their individual contexts and data into AI seamlessly.

Getting to the point of leveraging AI generally, however, will require businesses to take advantage of a modern cloud suite with unified data access and harmonized data models to overcome data silos and fully benefit from AI innovation that spans across the whole enterprise. This will drastically increase the accuracy and significance of AI-powered outcomes, ultimately boosting adoption, specifically in the enterprise space.

We can't wait to see what the future holds.


Sean Kask is vice president and head of AI Strategy for 麻豆原创 Business AI at 麻豆原创.
Walter Sun is senior vice president and global head of AI for 麻豆原创 Business AI at 麻豆原创.
Jonathan von Rueden is head of AI Frontrunner Innovation for 麻豆原创 Business AI at 麻豆原创.

Subscribe to the 麻豆原创 News Center newsletter for weekly news stories and highlights
]]>
Unlocking New Possibilities with Amazon Nova Models on 麻豆原创's Generative AI Hub /2024/12/amazon-nova-models-sap-generative-ai-hub/ Tue, 03 Dec 2024 22:30:00 +0000 /?p=230251 At 麻豆原创, we are committed to delivering transformative AI technologies that drive meaningful business impact. Today, at AWS re:Invent 2024, we're thrilled to unveil the immediate availability of Amazon's new foundation models (FMs), Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro, through 麻豆原创's generative AI hub, a capability of 麻豆原创 AI Core, adding to existing access to Anthropic Claude and Amazon Titan models from Amazon Bedrock.

The generative AI hub enables enterprise customers to easily access a broad range of commercial and open-source large language models (LLMs) in a safe and secure environment. This milestone marks a new chapter in our long-standing partnership with AWS, combining 麻豆原创's AI innovations and enterprise expertise with Amazon's latest and most advanced AI capabilities and technology solutions to unlock powerful opportunities for businesses.

Amazon Nova joins a growing portfolio of top-tier commercial and open-source models available through the generative AI hub. By making the new Amazon Nova models publicly available simultaneously with their release on AWS, 麻豆原创 helps ensure that customers can immediately leverage the latest innovations to build AI-driven solutions that harness the full business context in 麻豆原创 data.

Tailored AI Solutions for Every Need

The Amazon Nova release includes three state-of-the-art understanding models that are available today, enabling businesses to choose the right model for their unique needs:

  • Amazon Nova Micro: Text-only model that delivers the lowest latency responses at very low cost
  • Amazon Nova Lite: Very low-cost multimodal model that is lightning fast for processing image, video, and text inputs
  • Amazon Nova Pro: Highly capable multimodal model with the best combination of accuracy, speed, and cost for a wide range of tasks
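
A rough sketch of how an application might choose among the three tiers described above; the tier labels and the selection logic are illustrative assumptions, not an official AWS or 麻豆原创 API:

```python
def pick_nova_tier(multimodal: bool, quality_first: bool) -> str:
    """Pick the cheapest tier that satisfies the request, mirroring the
    trade-offs above: Micro for low-latency text, Lite for low-cost
    multimodal input, Pro when accuracy matters most.
    Tier names here are illustrative labels, not model IDs."""
    if quality_first:
        return "nova-pro"    # best balance of accuracy, speed, and cost
    if multimodal:
        return "nova-lite"   # low-cost, fast image/video/text processing
    return "nova-micro"      # text-only, lowest latency and cost

choice = pick_nova_tier(multimodal=True, quality_first=False)
```

In practice, routing logic like this lets an application default to the cheapest tier and escalate only when the task demands it.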

With these models now accessible through the generative AI hub, developers can efficiently leverage them alongside the extensive capabilities of 麻豆原创 Business Technology Platform (麻豆原创 BTP) to build and scale AI-powered solutions that seamlessly work with 麻豆原创 applications.

Streamlined Access and Integration

麻豆原创's generative AI hub helps simplify how businesses access and deploy advanced AI models. The Amazon Nova models, alongside other leading offerings, are available through a unified interface deeply integrated with 麻豆原创 BTP. This helps ensure customers can automate workflows with precision, build multimodal AI solutions tailored to specific challenges, and fully utilize the business context embedded in 麻豆原创 data.

By combining Amazon Nova models with 麻豆原创 BTP services like workflow automation and vector databases, businesses can develop customized AI solutions that can scale effortlessly. For example, developers can create new skills for Joule, 麻豆原创's AI copilot, leveraging Amazon Nova models and 麻豆原创 HANA's vector capabilities to process enterprise-specific data. This helps ensure AI outputs are not only accurate and relevant but also actionable, driving measurable results across operations.

麻豆原创 and AWS: Unlocking the Future of Business AI

The availability of Amazon Nova models on the generative AI hub marks a significant milestone in 麻豆原创鈥檚 long-standing collaboration with AWS. Together, we continue to address critical enterprise challenges such as data security, regulatory compliance, and scalability, ensuring businesses can confidently adopt AI solutions that deliver tangible value. This partnership underscores our shared vision of making advanced AI accessible and impactful for businesses worldwide, unlocking new levels of efficiency and innovation.

With access to Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro, 麻豆原创 customers can reimagine their operations: automating workflows, enhancing insights, and driving complex decisions. We're excited to see how our customers and partners leverage these capabilities to achieve impactful business outcomes.

Learn more about innovating with and the new .


Get business-ready, trusted, and intuitive AI capabilities
]]>
麻豆原创 Expands Business AI Portfolio with Meta Open-Source Models /2024/06/sap-business-ai-meta-open-source-models/ Tue, 04 Jun 2024 11:59:00 +0000 /?p=225385 As part of its business AI strategy, 麻豆原创 provides access to AI models through the generative AI hub in 麻豆原创 AI Core on 麻豆原创 Business Technology Platform (麻豆原创 BTP). By integrating advanced AI capabilities into the business context of organizations, customers benefit from solutions that are relevant, reliable, and responsible.

麻豆原创 Sapphire: Taking Business to the Next Level in the Era of AI

Grounded in an AI ethics framework, 麻豆原创 collaborates with partners to create insights from industry-specific data and deep process knowledge, driving exponential value for businesses. By fostering a multi-partner approach, 麻豆原创 ensures flexibility and prevents vendor lock-in, empowering customers and partners to navigate the AI landscape effectively. 

As a next step in that journey, 麻豆原创 is integrating the Meta Llama 2 and Meta Llama 3 models into the generative AI hub to enable customers to create dashboards based on rich content with qualitative conversational outputs from Llama, and to explore use cases built with Meta Llama 3.

The provides an overview of 麻豆原创's business AI strategy, highlighting the unparalleled opportunity for the 麻豆原创 ecosystem to build an array of solutions with generative AI capabilities and extensions on 麻豆原创 BTP.

麻豆原创 Analytics Cloud AI Portfolio Expansion 

麻豆原创 will make Joule available in 麻豆原创 Analytics Cloud to let planning and analytics users get work done faster and drive better business outcomes in a secure and compliant way. The wide range of capabilities offered by Meta Llama 3 will enable Joule users to get assistance with key activities, including the auto-generation of custom scripts to extend dashboards, as well as delivery of the most accurate scripts directly in 麻豆原创 Analytics Cloud.

The goal is to integrate generative AI into major workflows of 麻豆原创 Analytics Cloud. Transforming these workflows will unlock innovative possibilities for distinct planning and analytics roles that are common in organizations globally. More information on how 麻豆原创 Analytics Cloud with Joule will help planning and analytics users get work done more efficiently .

Availability of Meta Llama 3 on Generative AI Hub 

麻豆原创 will make Llama 3 available in Q2 2024 in the generative AI hub in 麻豆原创 AI Core given customer interest in Llama 3. The generative AI hub makes it simple to build generative AI use cases for 麻豆原创 applications, thanks to its value-adding features around 麻豆原创 integration and the ability to orchestrate language model interactions.

Adding Meta Llama 3 to the generative AI hub allows developers in the 麻豆原创 ecosystem to leverage its value-adding features. The 70B variant of Meta Llama 3 will be the largest open-source model available on the generative AI hub.

麻豆原创 SuccessFactors Text-to-Image Generation Requirements 

麻豆原创 customers are exploring ways to enhance the visual appeal and relevance of images within the 麻豆原创 SuccessFactors HCM suite. The current process of curating and uploading images can be time-consuming and costly, often resulting in generic images that only partially cater to specific scenarios while also incurring expenses associated with stock image licensing.

Integrating the latest iteration of Meta's open-source large language model (LLM) offers numerous benefits. Creative features from models such as Llama 3 come into play in a scenario like this, empowering users to generate custom images that better meet their needs within 麻豆原创 SuccessFactors software. Whether it is customizable course content thumbnails and banners for learning, tailored images for assignments in 麻豆原创 SuccessFactors Opportunity Marketplace, unique images for different themes on the home page, or personalized profile and banner images on people profile, the impact will be far-reaching.

By streamlining the image creation process, reducing costs, and ensuring that every image is suited to its context, this feature will undoubtedly improve user engagement and the overall platform experience. 麻豆原创 customers should get ready to explore a new world of custom imagery tailored to their exact specifications, sparking excitement and intrigue among stakeholders.


Walter Sun is senior vice president and global head of AI at 麻豆原创 SE.

Get the latest news and coverage from 麻豆原创 Sapphire in 2024
]]>
麻豆原创's Partnership with Mistral AI, One of the Leading LLM Makers /2024/06/sap-mistral-ai-leading-llm-maker-partnership/ Tue, 04 Jun 2024 11:57:00 +0000 /?p=225383 At 麻豆原创, we're always looking to the next wave of technological innovation, especially when it comes to enhancing the capabilities of 麻豆原创 applications and enterprise software through AI. To further the value we bring to customers, we're excited to announce the news of our latest partnership with Mistral AI, a trailblazer in the field of large language models (LLMs).

麻豆原创 Sapphire: Taking Business to the Next Level in the Era of AI

This collaboration is more than just a meeting of minds; it’s a symbiotic combination of AI expertise and technology that opens a world of possibilities for 麻豆原创 customers.

Mistral AI’s success in developing advanced LLMs, including its renowned open-weight models Mixtral 8x7B and Mixtral 8x22B, and more expansive enterprise-grade “Large” model, is set to complement the 麻豆原创 suite of AI-enabled solutions. The collaboration will enable direct accessibility to Mistral AI鈥檚 models through 麻豆原创 or through 麻豆原创 Business Technology Platform (麻豆原创 BTP) applications with generative AI capabilities.

What does this mean for 麻豆原创 customers? Simply put, it’s about empowerment and access to AI from a European LLM provider. Access to Mistral AI’s latest models through the generative AI hub in 麻豆原创 AI Core will enable 麻豆原创 customers to enhance productivity, streamline their operations, and accelerate their digital transformation journey.

Whether through integrating AI with 麻豆原创 BTP or developing bespoke solutions through direct access to Mistral AI LLMs, the potential for innovation is limitless.

"We are excited about entering a partnership with Mistral AI and making the company's LLM accessible to both our developers and our customers through the generative AI hub in 麻豆原创 AI Core on 麻豆原创 BTP," said Philipp Herzig, chief AI officer of 麻豆原创 SE. "Together, we can truly make a difference by building AI-enabled solutions that create immediate value for users, organizations, and entire industries. We are particularly proud that two European technology companies are collaborating on bringing AI forward."

"We are pleased to embark on this partnership with 麻豆原创," said Arthur Mensch, CEO of Mistral AI. "We foresee the new horizons this collaboration will open up, enabling us to further our mission of making AI accessible to all. We are looking forward to witnessing the potential of our AI models to support innovation and streamline operations for 麻豆原创's customers."

The ambitions don’t stop there: 麻豆原创 and Mistral AI are committed to exploring new applications of AI across various industries. By leveraging the combined strengths, this is not just about driving innovation, but about creating new business opportunities and delivering tangible value to 麻豆原创 customers.听

Stay tuned as we begin this exciting journey together. The future of enterprise software is bright, and with partners like Mistral AI, we are ready to illuminate the path forward.


Walter Sun is senior vice president and global head of AI at 麻豆原创 SE.

Get the latest news and coverage from 麻豆原创 Sapphire in 2024
]]>
Generative AI-Enabled Developers Are the Architects of the Future /2023/12/generative-ai-enabled-developers-architects-of-the-future/ Wed, 13 Dec 2023 13:15:00 +0000 /?p=220880 Artificial intelligence, particularly generative AI, continues to reinvent how we run our businesses and shape the ways people work.

With Gartner predicting that more than 80% of enterprises will be using generative AI application programming interfaces (APIs) or models, and/or deploying generative AI-enabled applications in production environments by 2026, we know there is vast opportunity ahead of us to build the tools and lay out the pathways to enable every developer to make an impact.

We believe AI will have a profound effect on the way engineers and developers work in three fundamental ways. 

  • First, how we build our software and applications
  • Second, what we build into the DevOps cycle tools and platform
  • And finally, who can code using natural language conversations, extending programming beyond low coders to no coders

Development will become even more democratized to anyone with great ideas and creativity.

麻豆原创 Business AI: Revolutionary technology, real-world results

At 麻豆原创, we are at the forefront of business AI. 麻豆原创 Business AI is about achieving real business results for the enterprise and making AI relevant, reliable, and responsible.

We have embedded AI capabilities into our solutions including Joule, our new generative AI copilot, which understands your business. 

We are working relentlessly to bring these generative AI use cases to our customers, including educating 麻豆原创 employees on best practices and providing AI tools internally to accelerate generative AI-embedded product development. We are doing the same for developers. Here are three specific ways 麻豆原创 is empowering every developer to become an AI developer.

Joule: Natural Language Programming

Joule, our generative AI copilot, revolutionizes the way users interact with 麻豆原创 business systems and data. With Joule, users can easily navigate across 麻豆原创 applications, find information, execute transactions, collaborate with colleagues, and receive proactive recommendations.

Instead of conducting laborious manual searches, Joule users can reduce their average search and query time by up to 80% by asking natural language questions and receiving intelligent answers on the wealth of business data and insights available across the 麻豆原创 portfolio.

麻豆原创 Build Code

Our new application development solution, 麻豆原创 Build Code, draws on the power of Joule to create data models, application logic, and test scripts. 麻豆原创 Build Code provides a simplified developer experience that drives productivity and supports fusion development by teams of professional and citizen developers.

AI Foundation on 麻豆原创 Business Technology Platform

We believe that business AI capabilities need to be directly embedded into business applications and extensions. The AI Foundation on 麻豆原创 Business Technology Platform (麻豆原创 BTP) is a one-stop shop for developers to do exactly that, providing ready-to-use AI services and tools to accelerate the development of generative AI-infused applications in a secure and trusted way.

AI Foundation includes everything developers need to run their business-ready AI in the applications they have built on 麻豆原创 BTP, from ready-to-use AI services to AI runtime and lifecycle management. We provide the tooling for the management of generative AI capabilities and ensure business data connectivity. We designed it with security, governance, and trust from the ground up. 

Looking Forward: The Right Model for the Right Business Need

In early 2024, the AI Foundation will also include a generative AI hub, giving developers instant access to a broad range of large language models (LLMs) from different providers, such as OpenAI's ChatGPT or Falcon-40B. We will soon add more models.

With this, developers can experiment by submitting a prompt to multiple LLMs, comparing the generated outcomes, and identifying the best model for the task. They can then power mission-critical processes with complete control and transparency, using features like built-in prompt history, all delivered with enterprise-grade security and data privacy.
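The comparison workflow described above can be sketched in a few lines. Everything here is illustrative: the `compare_prompt` helper, the provider names, and the lambda "models" are stand-ins for hosted LLM endpoints, not the generative AI hub's actual API.

```python
# Fan one prompt out to several models and collect outputs side by side,
# recording each call in a simple prompt history.

def compare_prompt(prompt, providers):
    """Return ({model_name: output}, history) for a single prompt."""
    results = {}
    history = []  # stands in for a built-in prompt history
    for name, call in providers.items():
        output = call(prompt)
        results[name] = output
        history.append({"model": name, "prompt": prompt, "output": output})
    return results, history

# Stub "models" for illustration; real code would call hosted LLM endpoints.
providers = {
    "model-a": lambda p: "[A] " + p.upper(),
    "model-b": lambda p: "[B] " + p.lower(),
}

results, history = compare_prompt("Summarize Q3 supplier performance", providers)
for name, text in results.items():
    print(f"{name}: {text}")
```

Placing the entries in `results` side by side is the point of the exercise: the developer picks the model whose output best fits the task, while `history` keeps every call auditable.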

Our generative AI hub will also provide tooling that helps developers supercharge their generative AI development processes, along with access to tools and techniques for extracting the most relevant information from business and other data.
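One widely used technique for extracting the most relevant information is retrieval: select the passages that best match a query before handing them to a model. The keyword-overlap scoring below is a deliberately minimal sketch of that idea (production systems typically use embedding-based similarity), and the example documents are invented.

```python
import re

def top_passages(query, passages, k=2):
    """Rank passages by the number of distinct words they share with the query."""
    def tokens(text):
        return set(re.findall(r"[a-z0-9]+", text.lower()))
    q = tokens(query)
    scored = sorted(passages, key=lambda p: len(q & tokens(p)), reverse=True)
    return scored[:k]

docs = [
    "Invoice 1042 from supplier Acme is due on March 3.",
    "The quarterly sales report covers EMEA revenue.",
    "Supplier Acme delivered invoice 1042 two days late.",
]
print(top_passages("When is the Acme invoice due?", docs))
```

The selected passages would then be inserted into the model prompt as context, grounding the generated answer in the enterprise's own data rather than in the model's training set.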

To get the most value in traditional business areas such as finance, sales, or supply chain, some enterprise customers require foundation models designed specifically for business context. 麻豆原创 is committed to fine-tuning generic LLMs on 麻豆原创 anonymized data and, in the longer term, to creating proprietary foundation models based on our vast structured business data. 

These models will be able to address questions that users face every day in business that LLMs cannot, such as predicting invoice payment dates and supplier delivery quality or proposing efficiencies to a business process. 

At 麻豆原创, we are building the tools that will enable every developer to become a business AI developer and, in the process, to become a true architect of the future.


Walter Sun is global head of AI at 麻豆原创.
