
Behind Plasma Bigscreen


Plasma has been designed from the get-go (2006 or so… it seems at least two eternities ago 🙂) not to make any assumptions about the type of device, and to keep a clear separation between the core technology/runtime and the various GUI plugins that end up implementing a full desktop experience.

In an architecture decision informed by the prototypes we did in KDE4 times for mobile device UIs, in Plasma 5 we split things further and introduced the concept of a “shell package”, which allows more customization between devices than Plasma in KDE4 times did.

Because of that, we could build the Plasma Mobile shell without any changes to the architecture: the same runtime powers both the desktop shell and the mobile version, despite them being completely different UIs.

While working on different shells such as Plasma Mobile and the Mycroft voice assistant (more on that later), we noticed that the best possible help for building a shell for a new type of device from the ground up is a very minimal shell that doesn’t make many assumptions about the final GUI, but just provides a few building blocks that the developer customizes in the end.

Enter Plasma Nano. Plasma Nano is a minimal shell, not intended for end users, which a developer extends to match the desired final user experience (think of subclassing in object-oriented languages).

The normal television set is turning more and more from a “dumb monitor” into a full-featured computer system. But for several reasons its UI must be completely different from both a desktop system and a mobile one.

These so-called 10-foot user interfaces have some particular constraints. The screen will always be viewed from quite far away (although you can’t be sure of the exact distance users will watch it from), so every graphical element must be very clear and big enough to be seen from a distance (indeed, ~10 feet / 3 meters or so): very big text, clear and well spaced, avoiding too high an information density.

Every control should be reachable with the smallest possible number of button presses. The whole UI should be easily controllable with just the remote control, and in particular with just the arrow keys, an “ok” button and one to go back. TV remote controls often have a plethora of buttons, though, which only makes the user experience more confusing.

Another important way of interacting that has appeared recently in smart TVs, to work around the lack of complex input methods, is voice control. It is ideal there, as the commands given to a TV tend to be very basic, like “watch this series on Netflix” and the like. This of course gives rise to a slew of very legitimate privacy concerns, but the technology is too good and convenient to throw away just because the current implementations are done by less-than-honest corporate giants.

Mycroft is an open source project aiming to build a fully free voice-based personal assistant with a growing number of skills.

Understanding voice comes in two independent parts: speech to text (going from the sound of the recorded voice to a plain string of text) and the actual semantic understanding of the sentence, translating it into a concrete action (fetching weather data, a YouTube video and so on). The latter part is the bulk of the work for a personal assistant and is what Mycroft implements; it can then use an external service for the speech-to-text step.
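To make that split concrete, here is a minimal sketch of what the second half (intent → action) looks like in a Mycroft skill, written in Python. The skill, its intent file and the dialog name are made up for illustration; the MycroftSkill base class, the intent_file_handler decorator and speak_dialog come from the Mycroft skill API.

from mycroft import MycroftSkill, intent_file_handler

class RainTomorrowSkill(MycroftSkill):
    """Hypothetical skill answering 'is it going to rain tomorrow?'."""

    @intent_file_handler('rain.tomorrow.intent')
    def handle_rain_tomorrow(self, message):
        # Speech to text has already happened: 'message' carries the
        # recognized utterance, matched against the example sentences
        # listed in rain.tomorrow.intent.
        forecast = self.get_forecast()
        self.speak_dialog('rain.tomorrow', data={'forecast': forecast})

    def get_forecast(self):
        # Placeholder: a real skill would query a weather API here.
        return 'no rain expected'

def create_skill():
    # Standard entry point Mycroft uses to load the skill.
    return RainTomorrowSkill()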

For that it can use a variety of services, from Google (just speech to text, not the full assistant) to a number of other free and proprietary services. In the end, the goal is to use the Mozilla DeepSpeech project, which would make the stack 100% free. You can already configure it to use DeepSpeech or other engines, even if it’s not fully ready yet (want to help out? Mozilla is looking for voice samples to make it better and ready for mass consumption!).

A voice assistant works better with a GUI part as well, as the Echo Show and the Google Assistant demonstrate.

Some of us worked together with the Mycroft people to produce an extensive set of QML bindings for the Mycroft system (which will also provide the user interface for future Mycroft-based smart speakers). They’re a standalone QML module that can be freely used by any Qt-based application for voice integration.

Plasma Bigscreen can optionally use those QML bindings to provide parts of the QML user experience (so yes, you can ask your TV by voice whether it’s going to rain tomorrow, or to play the music video of so-and-so on YouTube).

Those QML bindings can surface a variety of GUI features from Mycroft, from simple notifications (like a clock when you ask what time it is) to a full-featured interactive app, as we did for the YouTube skill: a YouTube browser app that can be used via voice only, the remote control only, or a combination of both, and which provides a blueprint for future rich voice-interactive user interfaces.
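For the curious, the skill side of that GUI integration is roughly a matter of setting properties on the skill’s gui object and pushing a QML page shipped with the skill. A minimal sketch, assuming a hypothetical skill (the page name, the property name and the search helper are invented; show_page and the dict-style gui access come from the Mycroft skill GUI API, so check them against your Mycroft version):

from mycroft import MycroftSkill, intent_file_handler

class VideoBrowserSkill(MycroftSkill):
    """Hypothetical skill: 'play so-and-so on YouTube', with a rich GUI."""

    @intent_file_handler('play.video.intent')
    def handle_play_video(self, message):
        query = message.data.get('video', '')
        # Values assigned to self.gui become properties the QML page can
        # read; Plasma Bigscreen renders that page through the QML bindings.
        self.gui['videoResults'] = self.search_videos(query)
        self.gui.show_page('VideoResults.qml')

    def search_videos(self, query):
        # Placeholder: a real skill would call a video search API here.
        return [{'title': query, 'url': 'https://example.invalid/watch'}]

def create_skill():
    return VideoBrowserSkill()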

Plasma Bigscreen


Today I want to introduce a project I have been working on (mostly in the background) together with some colleagues of mine… Now that beta status has been reached, it’s time to talk about it more publicly: enter Plasma Bigscreen.

Smart TVs are becoming more and more complete computers, but unfortunately the experience there tends to be a tightly walled garden of proprietary platforms, services and privacy-infringing features. Some of those features are very cool, like voice control, but in order not to pose a threat to the user’s privacy they should run on a free software stack, depending as little as possible on proprietary cloud platforms.

Plasma Bigscreen is a platform intended to be used on smart TVs (through a powerful enough small computing platform, such as the Raspberry Pi 4, or any tiny computer if you need more power) with big, remote-friendly UI controls and voice activation. What technology did we use for it? Plasma (of course!) and Mycroft.

Plasma Bigscreen main user interface

Plasma Bigscreen is a user interface providing a 10-foot UI for a smart TV experience (on TVs or any screen with an HDMI connection). It is a Plasma shell, just like our beloved Plasma Desktop and Plasma Mobile.

It uses the familiar look and feel of Plasma but optimizes all user interaction to be perfectly readable from a distance and controllable with a simple, basic remote control… from your couch.

Besides interaction with a remote control, it also supports a modern way of interacting that is revolutionizing the UX on all kinds of devices: voice. Optionally, it can integrate seamlessly with Mycroft, an open-source project aiming to offer a completely free voice assistant.

Mycroft offers so-called “skills”. Each skill takes care of a particular voice interaction. There is a huge variety available, from weather, date/time and reminders, to clients for online services such as SoundCloud and YouTube, with comprehensive QML bindings to show a rich user interface in a Plasma environment.

We prepared a beta image for the Raspberry Pi 4 for anyone to try, in which Plasma Bigscreen comes pre-configured with Mycroft integration ready to go.

If you have a Raspberry Pi 4, you can try this image immediately: you can find it here. Just flash the file onto a microSD card and it is ready to go on any Pi 4.

Note that in that image we point the device to the official Mycroft company’s “Mycroft Home” service, which internally uses Google STT (speech to text). It does use Google, but in a more anonymized fashion, as it is not tied to your Google account (nor does it require having one). Though not ideal, it’s possible to configure Mycroft core to use different services, even ones you can self-host, like Mozilla DeepSpeech.
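As a rough sketch of what that reconfiguration can look like: Mycroft picks its STT backend from the “stt” section of the user’s mycroft.conf. The Python snippet below merges a self-hosted DeepSpeech server into that file; the “deepspeech_server” module name and the URI are assumptions to verify against your Mycroft version’s documentation.

import json
from pathlib import Path

# Sketch: point Mycroft's speech-to-text at a self-hosted DeepSpeech server.
conf_path = Path.home() / ".mycroft" / "mycroft.conf"
config = json.loads(conf_path.read_text()) if conf_path.exists() else {}

config["stt"] = {
    "module": "deepspeech_server",  # assumed module name
    "deepspeech_server": {"uri": "http://localhost:8080/stt"},  # assumed URI
}

conf_path.parent.mkdir(parents=True, exist_ok=True)
conf_path.write_text(json.dumps(config, indent=2))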

To emphasize voice control, we prepared some skills (some preinstalled, some available from the KDE Store) that show it off fully; in particular a YouTube client skill which is perfectly usable both with the remote control only and with voice only.

YouTube skill shipped in the image

As a remote control, you can use your TV’s own remote directly if it supports the CEC standard, or any of those remotes with a USB dongle which are “seen” as a keyboard with just a few standard keys (some also have an integrated microphone, making voice commands very easy).

In the next few days there will be more posts explaining the architecture and the technical choices we made. In the meantime, give it a try, and if you want to get in contact with us about the project, you can find directions here.