Microsoft announces 100 new services, updates to help accelerate AI transformation
 
From: Susanna Ray, Microsoft
Sat, 25 Nov 2023  |  Nigeria
 

Microsoft has said it introduced around 100 new services and updates as part of its AI-forward strategy, including key developments within its productivity and security offerings.

The announcements were made at Microsoft Ignite 2023 – the company’s annual conference for developers and IT professionals, according to a statement by the company.

“As we reach the end of 2023, nearly every industry in Africa is undergoing a collective transformation, with estimates that AI could expand the continent’s economy by as much as 50 percent of current GDP by 2030 if the continent captures just 10 percent of the global AI market.

“Forward-thinking organisations and enterprising startups alike are discovering entirely new ways of working and harnessing the power of AI to address some of society’s most daunting challenges—from improving access to quality legal services in South Africa to extending the reach of healthcare professionals in low-resourced communities in Nigeria.”

Microsoft Ignite is a showcase of the advances being developed to help customers, partners and developers realise the full value of Microsoft’s technology and reshape the way work is done, the statement said.

 

There are strong signals of AI’s potential to transform work across the continent. As it stands, more than half of employees in Africa and the Middle East say they would change their minds about seeking out a new job if their current employer invested in new technology like automation. Eight months ago, Microsoft introduced Copilot for Microsoft 365 to reduce digital debt and increase productivity so people can focus on work that is uniquely human. Already, the company’s research, from a combination of surveys and experiments, demonstrates significant productivity gains:

The company said 70 percent of Copilot users were more productive, 68 percent said it improved the quality of their work, and 68 percent said it helped jumpstart the creative process.

Overall, users were 29 percent faster at specific tasks (searching, writing and summarising). The company’s latest announcements are geared towards helping accelerate existing progress, enabling faster and more profound transformation across sectors. Key updates include:

 

Rethinking cloud infrastructure

Microsoft has led with groundbreaking advances like partnerships with OpenAI and the integration of ChatGPT capabilities into tools used to search, collaborate, work and learn. As we accelerate further into AI, Microsoft is rethinking cloud infrastructure to ensure optimisation across every layer of the hardware and software stack.

At Ignite we are announcing new innovations across our datacentre fleet, including the latest AI-optimised silicon from our industry partners and two new Microsoft-designed chips:

Microsoft Azure Maia, an AI Accelerator chip designed to run cloud-based training and inferencing for AI workloads such as OpenAI models, Bing, GitHub Copilot and ChatGPT.

Microsoft Azure Cobalt, a cloud-native chip based on Arm architecture optimised for performance, power efficiency and cost-effectiveness for general purpose workloads.

Additionally, we are announcing the general availability of Azure Boost, a system that makes storage and networking faster by moving those processes off the host servers onto purpose-built hardware and software.

Complementing our custom silicon, we are expanding partnerships with our silicon providers to provide infrastructure options for customers.

We’ll be adding AMD MI300X accelerated virtual machines (VMs) to Azure. The ND MI300 VMs are designed to accelerate the processing of AI workloads for high-range AI model training and generative inferencing, and will feature AMD’s latest GPU, the AMD Instinct MI300X.

Also in preview is the new NC H100 v5 Virtual Machine Series, built for NVIDIA H100 Tensor Core GPUs and offering greater performance, reliability and efficiency for mid-range AI training and generative AI inferencing. In addition, we are announcing plans for the ND H200 v5 Virtual Machine Series, an AI-optimised VM featuring the upcoming NVIDIA H200 Tensor Core GPU.

Extending the Microsoft Copilot experience

To go beyond individual productivity, we are extending Microsoft Copilot offerings across solutions to transform productivity and business processes for every role and function.

Top Copilot-related announcements include:

Microsoft Copilot for Microsoft 365: The new Microsoft Copilot Dashboard shows customers how Copilot is impacting their organisation. To empower teamwork, new features for Copilot in Outlook help users prep for meetings, and during meetings, new whiteboarding and note-taking experiences for Copilot in Microsoft Teams keep everyone on the same page.

Microsoft Copilot Studio: This is a new end-to-end conversational AI platform that allows organisations to build their own copilots from scratch or adapt out-of-the-box copilots with their own data, logic, and actions relevant to their business needs.

Bringing Copilot to everyone: Bing Chat and Bing Chat Enterprise will now simply become Copilot. With these changes, when signed in with a Microsoft Entra ID, customers using Copilot in Bing, Edge and Windows will receive the benefit of commercial data protection.

Unlocking more value for developers with Azure AI

We continue to expand choice and flexibility in generative AI models to offer developers the most comprehensive selection. With Model-as-a-Service, a new feature in the model catalog we announced at Build, pro developers will be able to easily integrate the latest AI models, such as Llama 2 from Meta and upcoming premium models from Mistral and Jais from G42, as API endpoints in their applications. They can also customise these models with their own data without needing to set up and manage the GPU infrastructure, helping eliminate the complexity of provisioning resources and managing hosting.

With the preview of Azure AI Studio, there is now a unified and trusted platform to help organisations more easily explore, build, test and deploy AI apps – all in one place. With Azure AI Studio, you can build your own copilots, train your own models, or ground other foundational and open-source models with data that you bring.

And Vector Search, a feature of Azure AI Search, is now generally available, so organisations can generate highly accurate experiences for every user in their generative AI applications.
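
To illustrate the underlying idea (a conceptual sketch only, not the Azure AI Search API itself), vector search represents documents and queries as numeric embeddings and ranks results by how similar those vectors are:

    # Conceptual sketch of vector search: rank documents by the cosine similarity
    # between a query embedding and each document embedding. The embedding values
    # here are invented; a real system would obtain them from an embedding model.
    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    documents = {
        "healthcare-article": [0.10, 0.90, 0.30],
        "finance-article": [0.80, 0.20, 0.50],
    }
    query_embedding = [0.15, 0.85, 0.25]

    ranked = sorted(documents,
                    key=lambda name: cosine_similarity(query_embedding, documents[name]),
                    reverse=True)
    print(ranked[0])  # the most semantically similar document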

The new GPT-3.5 Turbo model with a 16K-token prompt length will be generally available, and GPT-4 Turbo will be in public preview in Azure OpenAI Service at the end of November 2023. GPT-4 Turbo will enable customers to extend prompt length and bring even more control and efficiency to their generative AI applications.
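
As a rough sketch of what this looks like for a developer, a chat deployment in Azure OpenAI Service can be called through the openai Python library; the endpoint, key, API version and deployment name below are placeholders rather than details from the announcement:

    # Minimal sketch: calling a chat model deployed in Azure OpenAI Service.
    # The endpoint, key, API version and deployment name are placeholders.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
        api_key="YOUR-API-KEY",                                   # placeholder
        api_version="2023-12-01-preview",                         # example version
    )

    response = client.chat.completions.create(
        model="gpt-35-turbo-16k",  # the name of your own deployment
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarise these meeting notes in three bullet points."},
        ],
    )
    print(response.choices[0].message.content)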

 

GPT-4 Turbo with Vision is coming soon to preview, and DALL·E 3 is now available in public preview in Azure OpenAI Service, helping fuel the next generation of enterprise solutions together with GPT-4 so organisations can pursue advanced functionalities with images. And when used with our Azure AI Vision service, GPT-4 Turbo with Vision even understands video for generating text outputs, furthering human creativity.
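
A similarly hedged sketch of multimodal prompting follows the same pattern; the deployment name and image URL are placeholders, and the exact message format can vary by API version:

    # Sketch: sending an image alongside text to a vision-capable chat deployment.
    # All names and URLs are placeholders.
    from openai import AzureOpenAI

    client = AzureOpenAI(azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
                         api_key="YOUR-API-KEY", api_version="2023-12-01-preview")

    response = client.chat.completions.create(
        model="gpt-4-vision",  # placeholder name of a vision-capable deployment
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this chart."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }],
    )
    print(response.choices[0].message.content)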

Introducing new experiences in Windows to empower employees, IT and developers

To further our mission of making Windows the home for developers and the best place for AI development, we announced a host of new AI and productivity tools for developers, including Windows AI Studio.

Announcing NVIDIA AI foundry service

NVIDIA is announcing its AI foundry service running on Azure, aimed at helping enterprises and startups supercharge the development, tuning and deployment of their own custom AI models on Microsoft Azure. The NVIDIA AI foundry service pulls together three elements – a collection of NVIDIA AI Foundation models, NVIDIA NeMo framework and tools, and NVIDIA DGX Cloud AI supercomputing and services – that give enterprises an end-to-end solution for creating custom generative AI models. Businesses can then deploy their models with NVIDIA AI Enterprise software on Azure to power generative AI applications, including intelligent search, summarisation and content generation.

Strengthening defenses in the era of AI

Microsoft is combining the power of leading solutions in SIEM, XDR and generative AI for security into the first unified security operations platform to help defenders by reducing the complexity of their environment. We are also adding new embedded Security Copilot experiences across the Microsoft Security portfolio.

10 AI Terms Everyone Should Know

By Susanna Ray

The term “AI” has been used in computer science since the 1950s, but most people outside the industry didn’t start talking about it until the end of 2022. That’s because recent advances in machine learning led to big breakthroughs that are beginning to have a profound impact on nearly every aspect of our lives. We’re here to help break down some of the buzzwords so you can better understand AI terms and be part of the global conversation.

1. Artificial intelligence

Artificial intelligence is basically a super-smart computer system that can imitate humans in some ways, like comprehending what people say, making decisions, translating between languages, analyzing if something is negative or positive, and even learning from experience. It’s artificial in that its intellect was created by humans using technology. Sometimes people say AI systems have digital brains, but they’re not physical machines or robots — they’re programs that run on computers. They work by putting a vast collection of data through algorithms, which are sets of instructions, to create models that can automate tasks that typically require human intelligence and time. Sometimes people specifically engage with an AI system — like asking Bing Chat for help with something — but more often the AI is happening in the background all around us, suggesting words as we type, recommending songs in playlists and providing more relevant information based on our preferences.

2. Machine learning

If artificial intelligence is the goal, machine learning is how we get there. It’s a field of computer science, under the umbrella of AI, where people teach a computer system how to do something by training it to identify patterns and make predictions based on them. Data is run through algorithms over and over, with different input and feedback each time to help the system learn and improve during the training process — like practicing piano scales 10 million times in order to sight-read music going forward. It’s especially helpful with problems that would otherwise be difficult or impossible to solve using traditional programming techniques, such as recognizing images and translating languages. It takes a huge amount of data, and that’s something we’ve only been able to harness in recent years as more information has been digitized and as computer hardware has become faster, smaller, more powerful and better able to process all that information. That’s why large language models that use machine learning — such as Bing Chat and ChatGPT — have suddenly arrived on the scene.
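
For readers who want to see that train-then-predict loop in code, here is a minimal sketch using the scikit-learn library; the numbers are invented purely for illustration:

    # Toy illustration of machine learning: fit a model to labeled examples,
    # then let it predict a label for an example it has not seen before.
    from sklearn.linear_model import LogisticRegression

    # Each example is [hours_practiced, pieces_learned]; label 1 means "can sight-read".
    X_train = [[10, 1], [200, 8], [5, 0], [400, 20], [50, 3], [300, 15]]
    y_train = [0, 1, 0, 1, 0, 1]

    model = LogisticRegression()
    model.fit(X_train, y_train)        # the training step: learn the pattern

    print(model.predict([[250, 12]]))  # predict for a new, unseen example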

3. Large language models

Large language models, or LLMs, use machine learning techniques to help them process language so they can mimic the way humans communicate. They’re based on neural networks, or NNs, which are computing systems inspired by the human brain — sort of like a bunch of nodes and connections that simulate neurons and synapses. They are trained on a massive amount of text to learn patterns and relationships in language that help them use human words. Their problem-solving capabilities can be used to translate languages, answer questions in the form of a chatbot, summarize text and even write stories, poems and computer code. They don’t have thoughts or feelings, but sometimes they sound like they do, because they’ve learned patterns that help them respond the way a human might. They’re often fine-tuned by developers using a process called reinforcement learning from human feedback (RLHF) to help them sound more conversational.
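
As a small, concrete illustration, a language model simply continues text with the words it judges most likely. This sketch uses the openly available GPT-2 model through the Hugging Face transformers library as a stand-in, not Bing Chat or ChatGPT themselves:

    # Minimal sketch: a small pretrained language model predicting likely next words.
    # GPT-2 is used only as a freely available stand-in for a much larger model.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator("Large language models learn patterns in text so that they can",
                       max_new_tokens=25)
    print(result[0]["generated_text"])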

4. Generative AI

Generative AI leverages the power of large language models to make new things, not just regurgitate or provide information about existing things. It learns patterns and structures and then generates something that’s similar but new. It can make things like pictures, music, text, videos and code. It can be used to create art, write stories, design products and even help doctors with administrative tasks. But it can also be used by bad actors to create fake news or pictures that look like photographs but aren’t real, so tech companies are working on ways to clearly identify AI-generated content.

5. Hallucinations

Generative AI systems can create stories, poems and songs, but sometimes we want results to be based in truth. Since these systems can’t tell the difference between what’s real and fake, they can give inaccurate responses that developers refer to as hallucinations or confabulations — much like if someone saw what looked like the outlines of a face on the moon and began saying there was an actual man in the moon. Developers try to resolve these issues through “grounding,” which is when they provide an AI system with additional information from a trusted source to improve accuracy about a specific topic. Sometimes a system’s predictions are wrong, too, if a model doesn’t have current information after it’s trained.
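
A simplified sketch of what grounding can look like inside an application: trusted text is retrieved first, and the model is instructed to answer only from it. The retrieval function below is a hypothetical stand-in, and the sketch stops at building the prompt rather than calling any particular model:

    # Simplified sketch of grounding: fetch text from a trusted source, then build a
    # prompt that tells the model to answer only from that text. retrieve_passages()
    # is a hypothetical stand-in for a real search over trusted documents.
    def retrieve_passages(query: str) -> list[str]:
        # Stand-in for searching an HR handbook, knowledge base, search index, etc.
        return ["Full-time employees receive 21 days of annual leave per year."]

    def build_grounded_prompt(question: str, passages: list[str]) -> str:
        context = "\n".join(f"- {p}" for p in passages)
        return ("Answer the question using only the sources below. "
                "If the sources do not contain the answer, say you do not know.\n"
                f"Sources:\n{context}\n\nQuestion: {question}")

    passages = retrieve_passages("annual leave policy")
    print(build_grounded_prompt("How many days of annual leave do I get?", passages))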

6. Responsible AI

Responsible AI guides people as they try to design systems that are safe and fair — at every level, including the machine learning model, the software, the user interface and the rules and restrictions put in place to access an application. It’s a crucial element because these systems are often tasked with helping make important decisions about people, such as in education and healthcare, but since they’re created by humans and trained on data from an imperfect world, they can reflect any inherent biases. A big part of responsible AI involves understanding the data that was used to train the systems and finding ways to mitigate any shortcomings to help better reflect society at large, not just certain groups of people.

7. Multimodal models

A multimodal model can work with different types, or modes, of data simultaneously. It can look at pictures, listen to sounds and read words. It’s the ultimate multitasker! It can combine all of this information to do things like answer questions about images.

8. Prompts

A prompt is an instruction entered into a system in language, images or code that tells the AI what task to perform. Engineers — and really all of us who interact with AI systems — must carefully design prompts to get the desired outcome from the large language models. It’s like placing your order at a deli counter: You don’t just ask for a sandwich, but you specify which bread you want and the type and amounts of condiments, vegetables, cheese and meat to get a lunch that you’ll find delicious and nutritious.
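
As a simple illustration of the difference that specificity makes, compare a vague prompt with one that spells out audience, length, tone and format; both strings are invented examples:

    # Illustration of prompt design: the specific prompt states audience, length,
    # tone and format, much like specifying exactly what you want on a sandwich.
    vague_prompt = "Write about our product."

    specific_prompt = (
        "Write a 100-word announcement of our note-taking app for busy IT "
        "administrators. Use a friendly but professional tone, mention that "
        "setup takes under ten minutes, and end with a single call to action."
    )

    print(specific_prompt)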

 

9. Copilots

A copilot is like a personal assistant that works alongside you in all sorts of digital applications, helping with things like writing, coding, summarizing and searching. It can also help you make decisions and understand lots of data. The recent development of large language models made copilots possible, allowing them to comprehend natural human language and provide answers, create content or take action as you work within different computer programs. Copilots are built with Responsible AI guardrails to make sure they’re safe and secure and are used in a good way. Just like a copilot in an airplane, it’s not in charge — you are — but it’s a tool that can help you be more productive and efficient.

 

10. Plugins

Plugins are like relief pitchers in baseball — they step in to fill specific needs that might pop up as the game develops, such as putting in a left-handed pitcher when a left-handed hitter steps up to the plate for a crucial at-bat. Plugins enable AI applications to do more things without having to modify the underlying model. They are what allow copilots to interact with other software and services, for example. They can help AI systems access new information, do complicated math or talk to other programs. They make AI systems more powerful by connecting them to the rest of the digital world.
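
A highly simplified sketch of the plugin idea: the underlying model is untouched, while the application keeps a registry of named tools it can run on the model’s behalf. The tools and the dispatch convention below are illustrative inventions, not any specific plugin framework:

    # Simplified sketch of plugins: the model itself is not modified; the application
    # exposes a registry of named functions that can be invoked on request.
    import datetime

    def get_current_date() -> str:
        return datetime.date.today().isoformat()

    def add_numbers(a: float, b: float) -> float:
        return a + b

    PLUGINS = {
        "get_current_date": get_current_date,
        "add_numbers": add_numbers,
    }

    def run_plugin(name: str, **kwargs):
        # In a real system, the model would emit the plugin name and arguments;
        # here the registry is called directly to show the mechanism.
        return PLUGINS[name](**kwargs)

    print(run_plugin("get_current_date"))
    print(run_plugin("add_numbers", a=2, b=3))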
