Google Cloud simplifies AI deployment for businesses in Southeast Asia

As businesses look to develop and deploy generative AI in their organizations, Google Cloud provides the capabilities needed to ensure a successful implementation.

Southeast Asia’s generative AI development and deployment continues to grow exponentially, and Google Cloud is making sure it plays a key role in this journey. Since the company first started offering its cloud services in the region, it has worked to ensure that businesses understand how best to build and deploy generative AI solutions.

According to Caroline Yap, Managing Director for Global AI Business at Google Cloud, generative AI for enterprises ultimately comes down to an organization’s growth, efficiency and plans for the future.

“Understanding the organization’s growth plans is key. From there, they will see the need to be efficient and be prepared for the future. Today, for example, CMOs are using AI tools through their existing programs and tech they have. But the real value of generative AI is transformation, and that comes from the cloud. Google Cloud provides the tools needed to achieve this,” said Yap.

While Yap acknowledged that the pandemic did accelerate the adoption of emerging technologies, the reality is that organizations, including governments, now understand the need for such capabilities.

For example, in the US, 69% of government agencies are already using AI for data analysis to support decision-making. Another 61% said AI is helping automate processes, while 56% said AI is already delivering citizen services.

However, despite this growth, there are still challenges to be addressed, and they are similar around the world. Yap pointed out that the biggest challenge comes from the culture and people in an organization rather than the technology itself. Many people fear the uncertainty generative AI brings to their roles.

“With any technology, there are different tasks that will be changed. But the key elements to use technology still need humans. With generative AI, it may be powerful, but it doesn’t mean it will replace everything. For example, for regulated industries, they need to understand where the generative AI tool ends and where humans need to be part of it. And for humans, it augments jobs and tasks. It removes the mundane tasks and it's down to the people to make it work, assuming everyone is onboard,” explained Yap.

On data privacy concerns, Yap pointed out that enterprises own their data when using generative AI applications built with Vertex AI. Many businesses also worry about regulatory issues when it comes to using AI. While governments have issued some guidelines on the use of AI, the reality is that much of it comes down to a company’s data management strategy as well.

Echoing Yap’s sentiments is Chester Chua, Cloud Policy Lead for Singapore and AI Policy Lead for APAC at Google Cloud. Chua explained that for AI policymakers, the top concerns are data governance, misinformation and bias, safety and security, and risk management.

“Our job at Google is to make it easier for companies to adopt AI by taking away these complications. For example, concerns on copyright content. What happens if content generated is similar to copyrighted content? Google Cloud will take responsibility for any third-party generated concerns. Say for example, a generated content happens to infringe copyright. Google Cloud will take care of it. We developed it, we know what data we used; hence we will be responsible. This is how we take concerns away to contribute to the ecosystem,” explained Chua.

AI in Cybersecurity

As generative AI can be applied to almost any part of an organization, one area showing increasing interest is cybersecurity. Generative AI is not only capable of helping businesses manage cybersecurity better, but it also allows organizations to improve their overall cybersecurity posture.

Mark Johnston, Director, Office of the CISO for Google Cloud APAC, stated that there are three pillars for AI and security at Google: AI threats, AI-driven security and Securing AI by Default. So far, Google’s use of AI in cybersecurity has stopped threats through auto-generated security controls, policies and configurations, while also reducing toil as systems become capable of securing themselves. This is done through AI-powered remediation with Frontline Threat Intel.

At the same time, with the shortage of cybersecurity talent remaining a problem for organizations, AI in cybersecurity can democratize security expertise. This means any IT professional can understand and manage a company’s cybersecurity framework with the help of generative AI.

“We create the same security capabilities everywhere and remain consistent across all industries. There are countless use cases for generative AI in defense that are tailored to each organization. There is no one-size-fits-all model,” said Johnston.

Citing a whitepaper published earlier this year, Johnston stated that Gemini was able to successfully patch 15% of simple vulnerabilities found by sanitizers. He also stated that generative AI helped incident teams write incident summaries 51% faster.

Interestingly, all of this also comes down to having a structured process, which in turn requires a structured data management capability. For CISOs, the top three risks in deploying and scaling AI are software lifecycle risk, data governance risk and operational risk. Data governance is key to addressing them.

Understanding LLMs and hallucinations

Another concern businesses have when applying generative AI to their products and services is hallucinations. Google Cloud defines AI hallucinations as incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.

This is also why data management is imperative when developing AI models. Yap recommends that businesses use smaller, domain-specific models to reduce the risk of hallucinations. Since a model will keep generating answers to queries regardless, managing the data and scoping models to a specific domain is what keeps those answers reliable.
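
To illustrate what scoping a model to a domain might look like in practice, here is a minimal, hypothetical sketch using the Vertex AI Python SDK. The project ID, region, model name and banking-FAQ system instruction are all illustrative assumptions, not details from the article.

```python
# Illustrative sketch: constraining a Vertex AI model to one domain.
# Project, region, model name and instruction text are assumptions.
import vertexai
from vertexai.generative_models import GenerativeModel, GenerationConfig

vertexai.init(project="my-project", location="asia-southeast1")

# Constrain the model to a narrow domain and keep sampling conservative,
# so it declines rather than improvises when the data isn't there.
model = GenerativeModel(
    "gemini-1.5-pro",
    system_instruction=(
        "You are an assistant for retail-banking FAQs only. "
        "If a question falls outside that domain or the provided data, "
        "say you do not know instead of guessing."
    ),
)

response = model.generate_content(
    "What are the fees for an overseas transfer?",
    generation_config=GenerationConfig(temperature=0.0),
)
print(response.text)
```

The system instruction and low temperature are the two levers here: one narrows what the model is allowed to answer, the other discourages it from inventing content when the data is thin.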

“Large language models (LLMs) are like a toddler’s brain. They understand the type of question asked, but is it possible to get the answer based on the data they have? Hence, there is a need to use the right-sized models for the right outcomes,” said Yap.

As such, Yap highlighted that this is where grounding and embeddings are key: they anchor the model to the right data so it provides the right answer. Otherwise, it is going to hallucinate.

“Grounding and embedding reduce hallucination and increase accuracy. Humans in the loop are able to give feedback. An LLM takes what it sees and gives it to you based on the prompt,” explained Yap.
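
The article does not describe Google Cloud’s grounding implementation, but the general pattern can be sketched: embed a small document set, retrieve the passages closest to a query, and hand only those passages to the model as context. Everything below, from the two-line corpus to the model names and project details, is an illustrative assumption.

```python
# Illustrative sketch of retrieval grounding with Vertex AI embeddings.
# Corpus, model names and project details are assumptions.
import numpy as np
import vertexai
from vertexai.language_models import TextEmbeddingModel
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="asia-southeast1")

docs = [
    "Overseas transfers cost a flat SGD 20 per transaction.",
    "Savings accounts accrue 2.5% interest annually.",
]

# Embed the corpus once up front.
embedder = TextEmbeddingModel.from_pretrained("text-embedding-004")
doc_vecs = np.array([e.values for e in embedder.get_embeddings(docs)])

def ground_and_ask(question: str, top_k: int = 1) -> str:
    # Embed the query and rank documents by cosine similarity.
    q = np.array(embedder.get_embeddings([question])[0].values)
    scores = doc_vecs @ q / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q)
    )
    context = "\n".join(docs[i] for i in np.argsort(scores)[::-1][:top_k])

    # Ask the model to answer only from the retrieved context.
    prompt = (
        f"Answer strictly from this context:\n{context}\n\n"
        f"Question: {question}\nIf the context is insufficient, say so."
    )
    return GenerativeModel("gemini-1.5-flash").generate_content(prompt).text

print(ground_and_ask("How much does an overseas transfer cost?"))
```

The point of the pattern is that the model answers from retrieved data rather than from memory, which is what Yap means by grounding reducing hallucination and increasing accuracy.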

The reality is that 80% to 90% of enterprise data is unstructured, and businesses don’t need a perfect data set environment to get value from generative AI. However, they do need to define the domain for each specific AI use case.

To achieve this, Yap stated, Google Cloud’s partner ecosystem plays an important role. Partners are key to augmenting a team’s strengths in understanding LLMs, and businesses need to work with a partner that knows how to build the right AI and data infrastructure. This includes understanding when to work with third-party or open-source models.

“For us, it’s a platform. Enterprise customers have the flexibility to adopt and use Vertex AI even if they are using a multi-cloud strategy. This means that wherever the data sits, it can work. More importantly, your data is where you are,” concluded Yap.