It feels like a billion years since the last Plasma sprint, which was in 2019 in Valencia, before the pandemic. This year we are finally back on track, and it was great to see many old friends again, as well as many new faces for whom this was their first sprint.
We were graciously hosted by Tuxedo Computers in Augsburg, makers of very nice laptops that ship with Linux and KDE Plasma, as well as being KDE patrons.
First of all, everybody got up to speed with a full git build of a Plasma 6 session, so that everyone could participate in development and discussions on the same footing.
There were many discussions about Plasma 6, about what we want to do in Plasma and in Kirigami, how we want to change the look and defaults for the new major release. Most of the user-facing changes have been wonderfully described by Nate.
For my part, I mainly worked on two things that were not exactly “glamorous” but quite important nevertheless (and mildly painful to do): a refactor of the plasmoid loading code, and splitting all the SVG theming code into a new framework with far fewer dependencies, ideally usable by any application.
Plasma API
I spent most of my hacking time at the sprint on a refactor of the plasmoid loading code, which won’t really be “seen” by the user, but will make the infrastructure much more robust and the API cleaner.
The people who do need to pay attention to it are plasmoid authors, who will need to adapt their plasmoid code in a few places.
Most notably, just as a QML application has to use ApplicationWindow as its root QML item, a plasmoid now has to use a PlasmoidItem root object.
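A minimal sketch of what a Plasma 6 plasmoid’s main QML file could look like with the new API (the root type comes from org.kde.plasma.plasmoid; exact imports and property names may still change while Plasma 6 is in development):

```qml
import QtQuick
import org.kde.plasma.plasmoid
import org.kde.plasma.components as PlasmaComponents

// PlasmoidItem replaces the old "any Item + Plasmoid attached property" root
PlasmoidItem {
    // the representations now hang off the PlasmoidItem root
    fullRepresentation: PlasmaComponents.Label {
        text: i18n("Hello Plasma 6")
    }
}
```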
The Plasma SVG code provides stylesheet-based recoloring, an on-disk image cache to speed up loading, and the 9-patch FrameSvg. Several applications would be interested in using it, and some actually already do, but plasma-framework has a lot of dependencies, which for some applications is a blocker. All the SVG code has now been broken out into a new framework called KSvg. It is still a work in progress, but in the end it will support all existing Plasma themes without changes, and an application that wants to use it will load its SVG sets from its own data folder (or anywhere else the application configures) instead of the share/plasma/desktoptheme folder where plasmashell looks for them, so apps can also use a completely different theme structure and don’t have to provide the same elements.
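Since KSvg is still work in progress, the details below are assumptions based on the current PlasmaCore QML API it is being split out of, but consuming an app-provided SVG theme could look roughly like this:

```qml
import QtQuick 2.15
// assumed import name for the new framework's QML bindings
import org.kde.ksvg 1.0 as KSvg

Item {
    width: 400; height: 200

    // a 9-patch frame, resolved from the application's own data folder
    // rather than from share/plasma/desktoptheme
    KSvg.FrameSvgItem {
        anchors.fill: parent
        imagePath: "widgets/background"   // hypothetical element shipped by the app
    }
}
```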
As announced previously, Plasma 5.27 will have significantly reworked multiscreen management, and we want to make sure this will be the best LTS Plasma release we have had so far.
Of course, this doesn’t mean it will be perfect from day one, and your feedback is really important, as we want to fix any potential issues as quickly as they get noticed.
As you know, for our issue tracking we use Bugzilla at this address. We have different products and components that are involved in the multiscreen management.
First, under New bug, choose the “plasma” category. Then there are 4 possible combinations of products and components, depending on the symptoms:
If the output of the command kscreen-doctor -o looks wrong, for example the listed “priority” is not the one you set in System Settings, or the geometries look wrong: product “kscreen”, component “common”.
If desktops or panels are on the wrong screen, or there are black screens but it is possible to move the cursor inside them: product “plasmashell”, component “Multi Screen Support”.
If ordinary application windows appear on the wrong screen or get moved to unexpected screens when screens are connected/disconnected, or some screens are black and it is not possible to move the mouse into them even though they look enabled in the System Settings Displays module or in the output of kscreen-doctor -o: product “kwin”, component “multi-screen”.
If the System Settings Displays module shows settings that don’t match reality, or that don’t match the output of the command kscreen-doctor -o: product “systemsettings”, component “kcm_kscreen”.
In order to have complete information about the affected system, its configuration, and the configuration of our multiscreen management, please include the following if you can:
Whether the problem happens in a Wayland or X11 session (or both)
A good description of the scenario: how many screens, whether it is a laptop or a desktop, and when the problem happens (startup, connecting/disconnecting, waking from sleep and things like that)
The output of the terminal command: kscreen-doctor -o
The output of the terminal command: kscreen-console
The main plasma configuration file: ~/.config/plasma-org.kde.plasma.desktop-appletsrc
Those items of information already help a lot in figuring out what the problem is and where it resides.
Afterwards we may still ask for more information, such as an archive of the main screen configuration files (the contents of the ~/.local/share/kscreen/ directory), but normally we won’t need that.
One more word on kscreen-doctor and kscreen-console
These two commands are very useful for understanding what Plasma and the rest of the system think about every connected screen and how they intend to treat it.
kscreen-doctor
Here is a typical output of the command kscreen-doctor -o:
Here we can see that we have 3 outputs, one internal and two via DisplayPort. DP-4 is the primary (priority 1), followed by eDP-1 (internal) and DP-3 (those priorities correspond to the new reordering UI in the System Settings Displays module).
Other important data points are the screen geometries (in italics in the snippet), which tell their relative positions.
kscreen-console
This gives somewhat more verbose information; here is a sample (only the data of a single screen is copied here, as the full output is very long):
The EDID Info section is also important, to check whether the screen has a valid and unique EDID, as invalid EDIDs, especially in combination with DisplayPort, are a known source of problems.
Ah, working with more than one screen: nice and convenient, but it has always been a scourge of usability, due to broken hardware and software, on every platform you can imagine. And yes, in Plasma we are notorious for having several problems with multiple screens, which have spurred many bug reports.
TL;DR
For Plasma 5.27 we did a big refactor of how screens are managed (and how they are mapped to desktops and panels), which should hopefully solve many of those issues and create a much more predictable experience, both for “fixed” multiscreen setups (like the one at your desk) and “on the go” ones (like attaching a projector at a conference).
No more default empty desktops after connecting a screen that you already connected in the past
No more desktops, wallpapers, widgets and panels lost after restart
No more different sets of desktops between X11 and Wayland
Better experience when using USB-C based docks
UI-wise, nothing will change in the System Settings module for the most common case of one or two screens, where you will be presented with the usual “Primary” checkbox.
When you have 3 or more screens, instead of a simple “Primary” checkbox you will be presented with a dialog where you can rearrange the screens in a logical order, so that you can say: “This is my screen number one, this is my screen number two” and so on.
This maps exactly to what “screen number one, two, etc.” are for the Plasma shell, so that whatever desktops, wallpapers, widgets and panels are on “screen number one” will always be on “screen number one” and never disappear again, and the same for each other screen, giving a completely predictable and stable multiscreen setup.
For 5.27 I worked mostly on the Plasmashell part, but many thanks and shouts to Ivan for his work on KScreen and to Xaver for his work on the KWin and Wayland protocol parts.
Long story: the moving parts
I would like to give a semi-technical but still quite high-level description of how our multiscreen infrastructure works and what we changed.
When you have multiple monitors connected, assuming the kernel and drivers do the right thing (which… doesn’t always happen), either X11 or Wayland (so in our case the KWin Wayland compositor) will see the screens and have information about them that we can enumerate and manage.
KScreen
In Plasma we have a daemon/library to manage the screens, called KScreen. It saves a configuration for each combination of screens it has encountered. Each screen is identified uniquely by its EDID, and when that fails (as happens with broken or duplicate EDIDs), by the name of the connector the screen is attached to (under X or libdrm connectors have names like HDMI-A-1, DP-2, eDP-1 and so forth). Connector names are not optimal either, but they are the best that can be done in the case of a misbehaving monitor. (For 5.27 Xaver fixed a quite big issue on this fallback path, which likely caused several of the reported bugs.)
The user can then choose in the System Settings module how the screens are arranged, their resolutions, which one is the primary and so on. When a screen is connected or disconnected, KScreen searches for an existing configuration for this set of screens and restores it.
Plasma
The Plasma shell also has to be aware of the multiple screen arrangement, because you want the desktops, with their wallpapers, widgets and icons, to be on the screen you expect them to be on, and all the panels to always be on the “proper” screen as well. That’s where we had a lot of problems in the past, and various approaches have been attempted.
Current
Up to 5.26, Plasma relied on two concepts to work out which screen a desktop or panel should be on: a single primary monitor, combined with screen connector names to disambiguate the second and third screens. In theory…
There are a couple of things that can go wrong with this: between X11 and Wayland, connector names tend to be different (for instance HDMI-1 vs HDMI-A-1 on my system).
Another problem is related to USB-C based docks that have video outputs (such as the Steam Deck dock, but this is common to other models and brands). They dynamically create and delete the outputs and connectors when they are plugged and unplugged, but the connector names are not stable between different plugs/unplugs: sometimes after re-plugging the old names are not recycled but incremented (so if they were DP-2 and DP-3, they may all of a sudden become DP-4 and DP-5). Since we used to store those names, we had a problem: each time a new connector name is encountered it is treated as a completely new thing, getting a new empty default desktop and no panels, causing many wtf moments.
Plasma containments (the technical name for desktops and panels) have always had a screen property associated with them, which is just an integer: 0 for the primary screen, then 1, 2, etc. for the others.
5.27
The new approach builds on a single design, extending the primary monitor concept to multiple monitors, which cleans up the codepaths even for the two monitor case.
We have now dropped the association map between screen number integers and connector names, and we use this ordered list all the way down, so that Plasma, KWin and KScreen agree exactly on which screen “screen 1” is.
We have a new Wayland protocol for screen ordering (as well as an X11 counterpart) so that KScreen and KWin agree on the real logical ordering of the screens; Plasmashell now follows this ordering without attempting to identify monitors by itself, removing a very big failure point.
As said, when your system has only 2 screens, in the System Settings module you’ll see exactly the UI it had before: just a checkbox to select which one is the “Primary”, so no surprises there. When you have 3 screens or more, you are presented with a dialog where you can visually reorder the screens, to decide which is the first, which is the second and so on.
This way, you can always be sure that whichever screen was set as “first” will have your “main” set of panels and desktop widgets, which you will never lose; likewise, whichever screen you set as “second” will always have the desktop and panels you put on it, and so on, giving a completely predictable and stable multiscreen setup.
Plasma has been designed from the get-go (2006 or so… it seems at least two eternities ago 🙂) to not make any assumptions about the type of device, and to keep a clear separation between the core technology/runtime and the various GUI plugins that end up implementing a full desktop experience.
In an architecture decision informed by previous prototypes we did in KDE4 times for mobile device UIs, in Plasma 5 we split things further and introduced the concept of a “shell package”, which allows more customization between devices than what Plasma in KDE4 times allowed.
Because of that we could build the Plasma Mobile shell, despite it being a completely different UI, without changes to the architecture that runs both the desktop shell and the mobile version.
While working on different shells such as Plasma Mobile and the Mycroft voice assistant (more on that later), we noticed that the best possible help for building a shell for a new type of device from the ground up is a very minimal shell that doesn’t make many assumptions about the final GUI, but just provides a few building blocks that the developer customizes in the end.
Enter Plasma Nano: a minimal shell, not intended for end users, which the developer extends to match the final desired user experience (think of subclassing in object-oriented languages).
The normal television set is turning more and more from a “dumb monitor” into a full-featured computer system. But for several reasons its UI must be completely different from both a desktop system and a mobile system.
These so-called 10-foot user interfaces have some particular constraints. The screen will always be viewed from quite far away (although you can’t be sure at exactly what distance users will look at it), so every graphical element must be very clear and big enough to be seen from a distance (indeed ~10 feet / 3 meters or so): very big text, clear and well spaced (high information density should be avoided).
Every control should be reachable with the smallest possible number of button presses. It should be easily controllable with just the remote control, and in particular with just the arrow keys, an “OK” button and one to go back. TV remote controls often have a plethora of buttons, though, which only makes the user experience more confusing.
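This is not Plasma Bigscreen code, just a toy QtQuick sketch of what driving a whole screen with only the arrow keys and an “OK” button looks like in practice:

```qml
import QtQuick 2.15

// a full-screen list navigable entirely with Up/Down and activated with Return/OK,
// with text big enough to read from a few meters away
ListView {
    width: 1280; height: 720
    focus: true                      // key events go straight to the list
    keyNavigationWraps: true
    model: ["Movies", "Music", "Settings"]
    spacing: 20

    delegate: Text {
        text: modelData
        font.pointSize: 36           // 10-foot UI: large, well-spaced text
        color: ListView.isCurrentItem ? "white" : "gray"
    }
    highlight: Rectangle { color: "#3daee9"; radius: 8 }

    Keys.onReturnPressed: console.log("activated item", currentIndex)
}
```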
Another important way of interacting with recent smart TVs, to work around the lack of complex input methods, is voice control. It is ideal there, as the commands given to a TV tend to be very basic, like “watch this series on Netflix” and the like. This of course gives rise to a slew of very legitimate privacy concerns, but the technology is too good and convenient to throw away just because the current implementations come from less than honest corporate giants.
Mycroft is an open source project aiming to build a fully open-source voice-based personal assistant with a growing number of skills.
Understanding voice consists of two independent parts: speech-to-text (going from the recording of the voice to a plain string of text) and the actual semantic understanding of the sentence, which translates it into a concrete action (fetching weather data, a YouTube video and so on). The latter part is the bulk of the work for a personal assistant and is what Mycroft implements; it can then use an external service for speech-to-text.
It can use a variety of services, from Google (speech-to-text only, not the full assistant) to a number of other free and proprietary services. In the end, they want to use the Mozilla DeepSpeech project, which would make the stack 100% free. You can already configure it to use DeepSpeech or other engines, even if it is not fully ready yet (want to help out? Mozilla is looking for voice samples to make their engine better and ready for mass consumption!)
A voice assistant looks better with a GUI part as well, as the Echo Show and Google Assistant demonstrate.
Some of us worked together with the Mycroft people to produce an extensive set of QML bindings for the Mycroft system (which will also provide the user interface for future Mycroft-based smart speakers). They are a third-party QML module that can be freely used by any Qt-based application for voice integration.
Plasma Bigscreen can optionally use those QML bindings to provide parts of the QML UI of the user experience (so yes, you can ask your TV by voice whether it’s going to rain tomorrow, or to play the music video of so-and-so on YouTube).
Those QML bindings can expose a variety of GUI features coming from Mycroft, from simple notifications, like a clock when you ask what time it is, up to a full-featured interactive app, like we did for the YouTube skill: a YouTube browser app that can be used via voice only, via the remote control only, or with a combination of both, and which provides a blueprint for future rich voice-interactive user interfaces.
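To give a rough idea of what a skill’s GUI looks like on the Qt side: this is only a sketch from memory of the mycroft-gui bindings, so treat the import, the Delegate type and the sessionData key as assumptions rather than the exact API:

```qml
import QtQuick 2.12
// assumed import of the Mycroft QML bindings (mycroft-gui)
import Mycroft 1.0 as Mycroft

// each visual skill provides one or more delegates like this; the values in
// sessionData are hypothetical keys pushed by the skill's backend
Mycroft.Delegate {
    Text {
        anchors.centerIn: parent
        font.pointSize: 40
        text: sessionData.currentTime   // e.g. filled in by a clock skill
    }
}
```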
From January 31st to February 8th I went on a little tour: first the two days of FOSDEM in Brussels, then to Berlin for a KDE sprint about Plasma Mobile.
It was the first time I went to FOSDEM: it’s an awesome experience, even though it is big and messy, which is both the awesome and the bad of it at the same time 🙂
Even though there were 800 talks, I didn’t attend that many: some about the Elixir language, some about retrocomputing, some about IoT stuff. At FOSDEM the best thing to do is meeting a lot of interesting people rather than attending the talks, which are very interesting nevertheless, and whose videos you can find here.
I stayed most of the time at the KDE booth. There we had, as usual, some hardware to show: besides the usual “normal” laptops running Plasma, we had a Pinebook Pro running Plasma on Manjaro, which runs impressively well for such a low-resource machine and was in fact a favorite among the visitors.
Besides that we had some PinePhones running Plasma Mobile, which were very popular as well. The Plasma Mobile shell was very stable compared to last year’s FOSDEM, which made me quite happy.
Watching people use it for the first time was also very precious feedback on the UI problems new users can encounter.
That feedback turned out to be very useful the next week, when, together with other people working on Plasma Mobile and two people from the UBports project, I flew to Berlin for a Plasma Mobile sprint, graciously hosted by KDAB in their offices.
There are many areas where we will be able to collaborate with UBports, starting from background technology such as their telephony services, the content sharing infrastructure, and maybe push notifications. In the end, though, we want their apps running smoothly on Plasma Mobile and our apps running smoothly on UBports as well, so as to have as many apps as possible.
Personally, the areas I worked on most at the sprint were the Plasma Mobile homescreen and fixes in Kirigami that were needed to make some Plasma Mobile apps look better.
On the Plasma Mobile homescreen, dragging plasmoid thumbnails from the widget explorer to the homescreen now works properly (the fix was in KWin, so dragging and dropping icons onto the desktop in a Wayland session should be fine now too).
In order to make the homescreen look a bit more “modern” and less busy, I also removed most of the gray background rectangles that were there, and in general simplified the homescreen code a bit; here is the result:
In general, it seems to me that Plasma Mobile is really starting to come together 🙂
From the 21st to the 24th of November, a bunch of KDE people gathered in Berlin, graciously hosted at the MBition offices, to discuss the next big iteration of the KDE Frameworks.
Work on Qt 6 has started: it will be a big refactor that makes the API quite a bit better and solves some architectural problems in certain Qt 5 areas (one of my personal favorites is the new QGuiAction class, coming out of splitting QAction into a QWidget-less implementation). In order to do that, it needs to be binary-incompatible with Qt 5, though.
As it will be incompatible, we need to adapt our software and our frameworks… a lot of work ahead, but big opportunities as well, so… it’s time for KF6, where we can do the same thing: polish our API and solve problems we couldn’t solve in KF5 in a binary-compatible way.
We worked in groups, assisted remotely by David Faure, and looked at the frameworks we were least happy with and that most need a refactor. The first obvious candidates are those in tier 3.
The tier system in Frameworks means that tier 1 frameworks don’t depend on anything other than Qt modules and base system libraries, tier 2 frameworks can also depend on tier 1 frameworks, and tier 3 on tier 2.
In reality there are some tier 3 frameworks whose “real” tier would be 5 or 6, as they may depend on multiple other tier 3 frameworks. Those of course are the first that need to be looked at. Ideally we should manage to lower the tier of frameworks as much as possible, and at least have tier 3 frameworks that are actually tier 3, and not 4 or 5.
Some typical examples are KIO, KDeclarative, KXmlGui, and Plasma-framework.
In the KDeclarative case, the reason is that its genesis was quite peculiar. When, in late KDE4 / early KF5 times, it started to become apparent that the future focus of GUI programming in Qt was probably going to shift towards QML, we chose at first to put the QML bindings of the frameworks we needed in a single umbrella repository, ending up with a git repo full of many tiny QML plugins, each depending on one framework or another… and thus with a framework that depended on just about all the others. It will be split up, and every useful QML binding will go into its proper framework: as QML is now a very central part of Qt, frameworks need to play well with it to be good citizens of the Qt ecosystem.
Plasma framework
Now on to a part that is really near to my heart: plasma-framework. It’s a pretty high tier because it depends on KXmlGui and KDeclarative. As we have seen, both of those dependencies will not be too hard to remove (famous last words :).
My plan is to have the main Plasma library split into 3 smaller frameworks, each of them tier 2 at most:
libplasma: the part that manages loading and saving your desktop layout, everything related to loading a plasmoid, and the API that plasmoids use to interact with the Plasma workspace
theming: Plasma uses SVG-based themes with significant optimizations, like disk caching of the rendered bits and support for stylesheets that recolor the graphics based on your system colors; this should become a framework in itself, usable by any app, probably tier 2
dataengines: a technology not much used anymore that is slowly being phased out; it should live standalone as a “porting aid”
With this, hopefully Plasma will become even leaner, further improving startup time and memory usage, while at the same time applications gain a framework for lightweight and feature-rich SVG-based graphics theming.
Kirigami used to have a Telegram channel as its main communication channel. This is of course not optimal, Telegram being a closed service and many potential contributors not having an account on it.
As of today, we also have an IRC channel, #kde-kirigami on Freenode, and #kirigami:matrix.org on Matrix.
The Telegram channel is still there, and all 3 are bridged with each other, so a message sent from any of the 3 platforms will also be received by users on the other two.
See you there 🙂
The time for Akademy came this year as well; this time it was in gorgeous Vienna, Austria.
This year marks my 10th Akademy in a row, starting from my first one in Belgium in 2008.
The talks have been awesome as usual, but what’s always most precious for me, year after year, is all the face-to-face conversation with so many diverse and smart people in our awesome KDE community.
Kirigami
For me the highlight was the BoF session on Kirigami, in which some nice plans, together with the VDG, are starting to form.
Kirigami is a QML-based UI framework at the core of some KDE applications, and it will become more and more central as more and more QML-based applications are written.
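For those who haven’t run into it yet, here is roughly what the skeleton of a Kirigami application looks like (a minimal sketch using the Kirigami 2 QML import; the same page adapts between desktop and mobile form factors):

```qml
import QtQuick 2.12
import QtQuick.Controls 2.12 as Controls
import org.kde.kirigami 2.12 as Kirigami

Kirigami.ApplicationWindow {
    title: "Hello Kirigami"

    // pages are pushed onto a column-based page stack that adapts to the screen size
    pageStack.initialPage: Kirigami.Page {
        title: "Main page"
        Controls.Label {
            anchors.centerIn: parent
            text: "The same code runs on desktop and mobile"
        }
    }
}
```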
So far it is still a relatively unknown gem in the KDE software and frameworks offering; however, as it matures technologically, we’ll start to advertise it more and simplify onboarding.
A big part of that will be about web presence and documentation:
A nice media-heavy introductory website which will showcase the features it can offer to your app, together with expanded sections on the central Kirigami UX patterns in the new Human Interface Guidelines website.
A series of tutorials on how to get started developing applications with the Kirigami toolkit.
Repurposing the example “Kirigami Gallery” application: it will become a showcase of the components and UI patterns developers are recommended to use. Each gallery page will also have documentation text, together with links to the corresponding HIG page and to the source code of the gallery page itself, to be used as a source of inspiration and best practices while developing your application.
Kirigami Gallery on the Cards pattern, mobile version
Kirigami Gallery on the Cards pattern, desktop version
If you think you can help on this web presence effort, you are welcome to join 🙂
Plasma
On the Plasma side, many improvements have been discussed and are on their way, such as better support for touch-based convertible laptops, a completely rewritten and overhauled notification system, and improved Virtual Desktops/Activities infrastructure and UI, on Wayland too.
But, more on all of this in the future 🙂
Vienna
Vienna is a really charming and beautiful city, I would totally recommend going there at least once.
It’s home not only to great musicians of the past:
Mozart
But also to important scientists who contributed so much to human knowledge and… contributed a little bit to making possible all the technology we know and love 🙂
Yesterday I led my first 30-meter pitch (Girotondo, 5c in Finale Ligure): quite another level of scary compared to the ~15-meter routes I was used to, and quite literally “pushing your comfort zone”.
When you see the last quickdraw you placed 3 meters below you (and the ground 20-something), and the next bolt just a couple of moves away… just a couple too many, every fiber in your body tells you “give up”; your fingers are starting to slip, they want to slip.
But you would really rather not give up, because, again, the last quickdraw you placed is 3 meters below you, so the fall would be very long (and scary, and potentially painful), and because, what the hell, did you come all the way up here just to give up?
So you say fuck the pain, fuck the fear, and just keep moving forward, one slow move at a time: just try to clip a quickdraw to one more bolt, and then maybe I’ll give up there. After all, even if I fail, yes, the fall will be long and scary and potentially painful, but at least I’ll know I failed trying. All the safety precautions were taken (it is a surprisingly safe thing to do once you go maniacally through the checklist; it’s a *very* safety-conscious community), so while accidents do happen, usually the “worst thing that can happen” is not very much. In that controlled environment you quickly learn to replace the “I should do this”, “I should have done that when I could” with “just go for it”.
This post wanders quite off topic… but has a lot of pretty pictures indeed 😉
Sometimes I ask myself why I am a software developer; the answer, in the end, is that I have always enjoyed creating things, whatever it is, watching something go from being sketched out entirely in my imagination to finally seeing it in the flesh. Writing software, especially graphical software, can be very satisfying exactly because of this mental process: seeing the thing you thought about slowly take shape and actually start working, with the gap between the mental image and the real thing narrowing more and more (yeah, I’m one of those heavy visual thinkers who can think almost exclusively in images).
But sometimes nothing can replace the satisfaction of creating an actual, beautiful object, and I also feel manual skills are something we should cultivate much more. I feel more complete if every now and then I do something that a) I know nothing about when I start and b) demands a difficult manual skill to craft.
That’s also why before drawing the svg for the Kirigami banner
I “had” to make some experiments of an actual kirigami…
But this post is not about that.
Almost a year ago, a friend of mine told me that he wanted to learn a bit to hack on some simple Arduino stuff, and you know what? I wanted too.
I have this stubborn quality that makes me go quite overboard when I decide to do something (especially if it is not for myself) and not stop until it is done, actually useful, and pretty. So if we are doing an Arduino project, let’s do something that has a use and that will be pretty… and that’s how the “Bagnur” project started (it means watering can in the Piedmontese language).
Of course it needed a logo :p
The project is one you see again and again on the interwebs, so it is just a remix of existing ideas rather than something truly innovative: the Arduino reads data from a soil moisture sensor (different humidity in the soil changes its electrical conductivity) to figure out how much water the soil of a potted plant contains.
When it is dry enough, it opens a solenoid-based valve to pour in enough water, and the moisture sampling goes on and on, hopefully stopping you from killing those poor potted plants of thirst after forgetting to water them for days 😉
The final hand made steampunk-like package
The moisture sensor is based on the LM393 chip, and the solenoid valve is similar to this one (hilariously over-specced for this project; I love this kind of overkill).
The Arduino output pins deliver too weak a current to operate the valve, so it has to be powered separately: the Arduino closes the valve’s circuit with a transistor. I had just salvaged an E13007 NPN transistor with useful characteristics (low base-emitter saturation voltage, withstands quite heavy loads) from a broken ATX power supply.
This made things a bit more interesting; luckily, in the end a single power source was enough to power the Arduino and the valve in parallel. Probably not particularly recommended, but cheap and compact (having 2 different power bricks for such a silly thing wouldn’t have been particularly fun for day-to-day use).
To make the project a bit more interesting, there is a potentiometer that regulates how much water the plant needs (different plants, different needs) and one of those pretty RGB LEDs, which acts as the “output UI” of the thing.
The state of the LED will be:
Fading from pure red (soil bone dry) to pure green (soil soaking wet) with all the values in between
Fading to blue when the valve is open, and the plant is being watered
The LED usually stays at very low brightness; the sensor takes a reading every 10 seconds or so, and when that happens the LED fades up to full brightness with a nice animation.
For the final build, I decided to use an Arduino Nano-compatible board, also based on the ATmega328 chip, since I don’t need many pins or much performance; it just has to be as small, low-power and cheap as possible.
In this GitHub repo there are both the source code of the program running on the Arduino and a couple of schematics drawn with Fritzing, so it should be easy to replicate and improve… if someone really wants to 😉
The final board lives in a hand-made wooden case that gives it a cool, almost “steampunk” look: that was honestly the most fun part of the whole project for me, probably because it was the farthest from what I usually do, the most low-tech, hand-skill-demanding part (and slightly dangerous… yep, it involves rotating blades :p).
Video of the thing in action
This video shows some assembly steps and the thing in action, both opening the valve automatically based on the moisture sensor values and manually with the button on top.
Board
Let’s start with the usual prototyping, using the classic Arduino Uno/breadboard combination (here still with differently colored LEDs instead of the final single RGB one).
After the breadboard prototype is done, it’s time to lay out the components on a perfboard, to make it look more like a final prototype, usable day to day.
The first components attached to the perfboard. The Arduino Nano will connect to it as a daughter board.
Yep, I definitely need to improve my soldering skills, but I swear, even if it is kinda ugly, it works like a charm 😉
Still some components missing, already attached to the wooden base
Woodwork
As I mentioned, for me the most interesting part was building a wooden case from scratch, starting from a raw plank of wood and trying to master some of the classic woodworking techniques.
The plank is made thinner and smoother with a thickness planer; after this step the machine’s setup is changed so it becomes a surface planer, for more precise and smoother retouching. The process is repeated until the wood reaches the desired thickness.
Did I mention, ROTATING BLADES?
All the sides of the box are done
After the box has been glued, it has to be tied very, very tightly as the glue dries
The joints are not very precise …yet
…But some aggressive sandpapering certainly helps, best if done with a sander
The lid of the box, with a translucent, waterproof paint; attached to the top are the moisture sensor, the RGB LED and a button to manually open the valve.
The complete device, ready to be closed. The wires on the left control the RGB LED, the blue and green wires in the middle control the button, the wires on the right control the moisture sensor.
The completed little, mean machine.
It has been an insanely fun project, and I am sure my technique can still improve in all areas (designing an electronics board, soldering, woodwork…). Whether I’ll keep trying to improve with new projects I don’t know yet, but the recommendation I can make is: get out of the comfort zone of your day-to-day work and experiment. You have only to gain, if only for the act of doing things wrong, without which there is no learning.