CES 2026: LG Makes Advances in Home Robot Category

LG Electronics signalled its intention to enter the home robot space, rather than just demonstrating a concept, on the CES show floor.

The LG CLOiD home robot performed household tasks such as folding laundry, and demonstrated an understanding of the home environment by syncing with other LG appliances: communicating with the fridge about the ingredients inside, switching on the oven after recognising that the planned recipe required pre-heating, and turning on the air conditioning.

The robot moves on a wheeled base and uses human-style arms and hands with actuated fingers.

While the company would not comment on timing or pricing just yet, LG Electronics Australia marketing director Gemma Lemieux, asked whether the robot could appear on the floor of a big-box retailer in Australia, told Appliance Retailer that it may eventually be sold through traditional retail channels.

“If retailers think this is right for their consumer, yes, definitely. We’re going to have that relationship and you’ll still go through retail because it gives you that connected home. You can see how it talks to washing machines and ovens via the connected solution we offer,” she said.

“From a retailer perspective, it’s an attractive proposition because they can start to look at that as a whole solution for consumers. I would imagine that we would look at selling through retail and it’ll be interesting as to how we support it with customer service as well,” she added.

LG Home Appliance Solution Company president, Steve Baek further commented, “The LG CLOiD home robot is designed to naturally engage with and understand the humans it serves, providing an optimised level of household help. We will continue our relentless efforts to achieve our Zero Labor Home vision, making housework a thing of the past so that customers can spend more time on the things that really matter.”

The core of LG CLOiD is its ‘Physical AI’ technology, which converts image and video data into structured data for machine-learning analysis before taking action. The technology has two main components: a Vision Language Model (VLM) that converts images and video into structured, language-based understanding, and a Vision Language Action (VLA) model that translates visual and verbal inputs into physical actions.
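The two-stage flow described above can be sketched in miniature: perception first produces structured, language-like facts, and a separate action stage maps those facts plus a verbal instruction to physical commands. This is a hypothetical illustration only; all names, data structures, and logic below are invented for the sketch and are not LG's implementation.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """Hypothetical stand-in for camera frames plus appliance telemetry."""
    fridge_contents: list
    recipe_needs_preheat: bool

def vlm_perceive(obs: Observation) -> dict:
    """Stage 1 ('VLM'): convert raw observation into structured,
    language-based understanding."""
    return {
        "ingredients": sorted(obs.fridge_contents),
        "preheat_required": obs.recipe_needs_preheat,
    }

def vla_act(understanding: dict, instruction: str) -> list:
    """Stage 2 ('VLA'): translate understanding plus a verbal
    instruction into a list of physical actions."""
    actions = []
    if instruction == "start dinner":
        if understanding["preheat_required"]:
            actions.append("oven:preheat")
        actions.append("arms:gather " + ", ".join(understanding["ingredients"]))
    return actions

obs = Observation(fridge_contents=["eggs", "butter"], recipe_needs_preheat=True)
plan = vla_act(vlm_perceive(obs), "start dinner")
print(plan)  # -> ['oven:preheat', 'arms:gather butter, eggs']
```

The design choice the sketch reflects is the separation LG describes: the perception stage never issues commands, and the action stage never touches raw pixels, only the structured understanding.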

SOURCE: Appliance Retailer
