K-Glass 3 provides a virtual keyboard

EurekAlert:

Recently, gaze recognition was proposed for HMDs, including K-Glass 2, but gaze alone is insufficient to realize a natural user interface (UI) and user experience (UX), such as recognizing the user’s gestures, due to its limited interactivity and lengthy gaze-calibration time, which can run to several minutes.
As a solution, Professor Hoi-Jun Yoo and his team from the Electrical Engineering Department recently developed K-Glass 3 with a low-power natural UI and UX processor to enable convenient typing and screen pointing on HMDs with just bare hands. This processor is composed of a pre-processing core to implement stereo vision, seven deep-learning cores to accelerate real-time scene recognition within 33 milliseconds, and one rendering engine for the display.
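
To make that pipeline concrete, here’s a rough sketch of how the three stages could fit together under a 33-millisecond frame budget. Every name here is a hypothetical stand-in of mine – the article doesn’t describe any software API, and the real K-Glass 3 implements these stages in dedicated silicon.

```python
import time

FRAME_BUDGET_S = 0.033  # 33 ms per frame, i.e. roughly 30 fps


def preprocess_stereo(left_frame, right_frame):
    """Stand-in for the pre-processing core: just pairs the two camera
    frames; the real hardware would compute a depth map from them."""
    return {"left": left_frame, "right": right_frame, "depth": None}


def recognize_scene(stereo_data):
    """Stand-in for the seven deep-learning cores: returns a dummy label
    where the real hardware would run scene recognition."""
    return {"label": "keyboard_surface", "confidence": 1.0}


def render_overlay(scene):
    """Stand-in for the rendering engine that drives the display."""
    return f"overlay for {scene['label']}"


def process_frame(left_frame, right_frame):
    # Run the three stages in order and check the frame budget.
    start = time.perf_counter()
    stereo = preprocess_stereo(left_frame, right_frame)
    scene = recognize_scene(stereo)
    overlay = render_overlay(scene)
    elapsed = time.perf_counter() - start
    if elapsed > FRAME_BUDGET_S:
        print(f"frame overran the 33 ms budget: {elapsed * 1000:.1f} ms")
    return overlay


print(process_frame(b"left pixels", b"right pixels"))
```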

Sounds like they’ve done some interesting work on image recognition there – I’m assuming it’s capable of ‘anchoring’ the keyboard(s) to something in the background, rather than just having them move around wildly as your head moves.
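
For what it’s worth, here’s a toy illustration of what that ‘anchoring’ could mean: keep the keyboard at a fixed position in world space and re-project it into the (moving) head frame every frame, instead of pinning it to the display. The 2D simplification and all the names are mine, not anything from the article.

```python
import math


def world_to_head(point_xy, head_xy, head_yaw):
    """Transform a 2D world-space point into head-relative coordinates:
    translate by the head position, then rotate by the negated yaw."""
    dx = point_xy[0] - head_xy[0]
    dy = point_xy[1] - head_xy[1]
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    return (c * dx - s * dy, s * dx + c * dy)


# Keyboard anchored two metres in front of the world origin.
keyboard_world = (2.0, 0.0)

# As the head turns, the keyboard's head-relative position shifts the
# opposite way, so it appears to stay put in the world rather than
# following your gaze around.
for yaw_deg in (0, 15, 30):
    x, y = world_to_head(keyboard_world, (0.0, 0.0), math.radians(yaw_deg))
    print(f"head yaw {yaw_deg:>2} deg -> keyboard at ({x:+.2f}, {y:+.2f}) in head frame")
```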
I like that they pointed out that gaze-tracking isn’t good for a natural UI – that’s something I’ve always thought whenever a science fiction book describes it. Really? You want to control a battlefield heads-up display with weird eye motions? That is, quite possibly, the most distracting way to navigate a menu.