Discover the benefits of using Ubuntu for open-source AI and how to seamlessly deploy models on Azure, including leveraging GPU and Confidential Compute capabilities. ...
In our previous post, we discussed how to generate images using Stable Diffusion on AWS. In this post, we will guide you through running LLMs for text generation in your own environment on a GPU-based instance, in a few simple steps, empowering you to build your own solutions. Text generation, a trending focus in generative AI, facilitates ...
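As a taste of what the post covers, here is a minimal sketch of GPU-backed text generation, assuming PyTorch and the Hugging Face transformers library are installed on the instance; the small GPT-2 checkpoint below is only a placeholder for whichever open LLM you choose.

```python
# Minimal sketch: text generation with an open LLM on a GPU instance.
# Assumes torch and transformers are installed and a CUDA GPU is available.
# The model name is illustrative; swap in a larger instruction-tuned model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="gpt2",   # placeholder checkpoint from the Hugging Face Hub
    device=0,       # run on the first GPU
)

prompt = "Explain why GPUs speed up large language model inference."
outputs = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```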
This blog post will show you how to run one of the most widely used generative AI models for image generation on Ubuntu, on a GPU-based EC2 instance on AWS ...
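As a preview, here is a minimal sketch of that workflow, assuming the diffusers and torch packages are installed on a CUDA-capable instance; the model ID is illustrative and not necessarily the exact checkpoint the post deploys.

```python
# Minimal sketch: image generation with Stable Diffusion on a GPU instance.
# Assumes diffusers, transformers and torch are installed and a CUDA GPU is
# available. The model ID is illustrative; substitute the checkpoint you use.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,  # half precision keeps VRAM usage modest
)
pipe = pipe.to("cuda")

image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")
```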
Eight trends to keep an eye on this Artificial Intelligence Appreciation Day. On 16 July the world celebrates International Artificial Intelligence Appreciation Day. In the previous century, science fiction often covered topics and inventions that are now closer to science fact, such as humanoid robots. In the 1950s, artificial intelligence met both grea ...
The latest release of Canonical’s end-to-end MLOps platform brings advanced AI/ML training capabilities. 8 September 2022: Canonical, the publisher of Ubuntu, today announces the release of Charmed Kubeflow 1.6, an end-to-end MLOps platform with optimised complex model training capabilities. Charmed Kubeflow is Canonical’s enterprise-rea ...
In this post we’ll explore the concepts of data lake, data hub and data lab. There are many opinions and interpretations of these concepts, and they are broadly comparable. In fact, many might say they’re synonymous and we’re just splitting hairs. Let’s look again. ...
AI/ML model training is becoming more time-consuming due to the increase in data needed to achieve higher accuracy levels. This is compounded by growing business expectations to frequently re-train and tune models as new data becomes available. Combined, these factors are resulting in heavier compute demands for AI/ML applications. This trend is set t ...
Deploying AI/ML solutions in latency-sensitive use cases requires a new solution architecture approach for many businesses. Fast computational units (e.g. GPUs) and low-latency connections (e.g. 5G) allow AI/ML models to be executed outside the sensors/actuators (e.g. cameras and robotic arms). This reduces costs through lower hardware ...
Artificial Intelligence and Machine Learning adoption in the enterprise is exploding from Silicon Valley to Wall Street, with diverse use cases ranging from the analysis of customer behaviour and purchase cycles to diagnosing medical conditions. Following on from our webinar ‘Getting started with AI’, this webinar will dive into what succe ...
From the smallest startups to the largest enterprises, organisations are using Artificial Intelligence and Machine Learning to make the best, fastest, most informed decisions to overcome their biggest business challenges. But with AI/ML complexity spanning infrastructure, operations, resources, modelling, compliance and security, ...
Kubeflow, the Kubernetes-native application for AI and Machine Learning, continues to accelerate feature additions and community growth. The community has released two new versions since the last KubeCon (0.4 in January and 0.5 in April) and is currently working on 0.6, due out in July. The key features in ...