As vendor events go, Microsoft Build is one of the more interesting because it focuses on the people who create things.

While Build is mostly about software, there’s usually a considerable amount of hardware news as well, and it can, at times, be revolutionary. Major breakthroughs in software and hardware don’t typically happen at the same show, but this year brought both: new ARM-based, four-processor PCs, and AI applications that tackle the most pervasive problem in computing, one that has gone largely unaddressed since the field’s inception, namely enabling users to interact easily and naturally with PCs.

Project Volterra

The hardware announcement was Project Volterra, which boasts four processors, two more than the CPU-and-GPU pairing we’ve known for years. The third processor is a Neural Processing Unit (NPU) dedicated to AI workloads, which, according to Microsoft, it handles faster and with far less energy than a CPU or GPU.

The fourth processor, which I’m calling an ACU, or Azure Compute Unit, lives in the Azure cloud. That makes Volterra arguably the first hybrid PC, sharing its load between the cloud and the device, and the hardware is stackable if more localized performance is needed. Volterra may look like a well-provisioned small-form-factor PC, but while it’s targeted at creating native Windows-on-ARM code, it also previews the ARM PCs we’ll see on the market once that code is available.

Useful AI

As fantastic as this new hardware is, Microsoft is a software company with a deep history in development tools that goes all the way back to its roots. A huge problem computing has had since its inception is that people have to learn how to interact with the machines, which, in an ideal world, would make no sense.

Why would you build a tool people have to adapt to, and then create programming languages that require massive amounts of training? Why not put in the extra work so that we can communicate with computers the way we communicate with each other? Why not create a system to which we can explain what we want and have the computer create it?

Granted, a lot of us have trouble explaining what we want, but training people to do that better would have broad positive implications for our ability to communicate overall, not just with computers. In short, having computers respond to natural-language requests would force us to teach people to communicate better in general, leading to fewer conflicts, fewer mistakes, and far deeper, more understanding relationships, not just with computers but with each other. That is something I think you’ll agree we need right now.

GitHub Copilot

The featured offering is GitHub Copilot, which builds code collaboratively with the developer using an AI. It anticipates what needs to be done and suggests it, and it provides written code that corresponds to the coder’s request. Not sure how to write a command? Just ask how it would be done, and Copilot will provide the answer.
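To make that workflow concrete, here is a hypothetical sketch of the comment-to-code pattern Copilot demonstrations typically follow; the function and comments below are my own illustration, not an example from Microsoft’s presentation:

```python
# Hypothetical Copilot-style interaction: the developer writes only the
# comment and the function signature; the assistant proposes the body.

def is_palindrome(text: str) -> bool:
    """Return True if text reads the same forward and backward."""
    # Suggested completion: ignore case and non-alphanumeric characters,
    # then compare the cleaned string to its reverse.
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
```

The developer remains in the loop throughout: the suggestion can be accepted, edited, or rejected, which is what makes the interaction collaborative rather than automatic.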

Microsoft also provided examples of several targeted prototypes built on OpenAI’s Codex, the model that underpins Copilot. One went further by generating more complete code, while another, used for web research, didn’t just identify the source but pulled out the relevant information and summarized it. I expect this capability will find its way underneath digital assistants, making them far more capable of providing complete answers in the future.

A demonstration that really caught my attention was OpenAI’s DALL-E (pronounced “Dolly”), a prototype program that creates an image from your description. One use: young schoolchildren described inventions they had dreamed up, which led to images of shoes made from recycled trash, a robotic space trash collector, and even a house, somewhat like the Jetsons’ apartment, that could be raised or lowered according to the weather.
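The Build demonstration was aimed at kids, not coders, but for developers curious about the plumbing, here is a minimal sketch of the prompt-to-image flow using the image endpoint OpenAI later exposed in its legacy (v0.x) Python client; the prompt is my own invention, not one from the demo:

```python
# Minimal prompt-to-image sketch, assuming the legacy openai Python
# package (v0.x) with an OPENAI_API_KEY set in the environment.
import openai

response = openai.Image.create(
    prompt="a house on legs that rises above the clouds in bad weather",
    n=1,             # number of images to generate
    size="512x512",  # supported sizes include 256x256, 512x512, 1024x1024
)
print(response["data"][0]["url"])  # URL of the generated image
```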

Due to current events, I’m a bit more focused on children this week, but I think a tool like this could be an amazing way to visualize ideas and convey them better. They say a picture is worth a thousand words; this AI can create that picture from just a few words. While the images were initially cartoonish (something the upscaling tools from companies like AMD and NVIDIA could address), they nevertheless excited and enthralled the kids. It was also, I admit, magical for me.

A Useful Future for AI

Microsoft Build showed me the best future for AI: applied not to weapons or to convincing me to buy something I don’t want (extended car insurance, anyone?), but to removing the drudgery from coding, enabling people with less training to create high-quality code, translating imagination into images, and making digital assistants much more useful.

I’ve also seen the near-term future of PCs: four processors, access to the near-unlimited processing power of the cloud (including Microsoft’s Azure supercomputers when needed), and an embedded AI that could use the technology above to help the computer learn, for once, how to communicate with us rather than the other way around.

This year’s Microsoft Build was, in a word, extraordinary. The things Microsoft talked about will have a significant, and largely positive, impact on our future.
