La Fonera, that fully tivoized (as RMS would say :)) WiFi access point by FON, has been hacked two (now three :)) times, and it has always been patched very quickly.
The last one, discovered here (with a nice tutorial here), was fixed in version 0.7.1 of their firmware, but there is still a very similar hole in the web form, again involving unescaped evil characters…
Just replace "/usr/sbin/iptables -I INPUT 1 -p tcp --dport 22 -j ACCEPT" and "/etc/init.d/dropbear" in step1.html and step2.html with "$(/usr/sbin/iptables -I INPUT 1 -p tcp --dport 22 -j ACCEPT)" and "$(/etc/init.d/dropbear)".
Once that is done, just follow the instructions of the last method…
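To make it clear what those substitutions do: since the web form passes the field to the shell unescaped, the $(...) command substitutions simply make the router run the two commands quoted above, i.e.:

/usr/sbin/iptables -I INPUT 1 -p tcp --dport 22 -j ACCEPT   # open TCP port 22 in the firewall
/etc/init.d/dropbear                                        # start the dropbear SSH daemon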
Now, it will surely be fixed in the next version, and you know what? I hope it will be, because it's a very serious security problem; but it's very sad that everything is becoming more and more broken by design, so pleeeeease FON, open that SSH by default and we will all looove you 🙂
Wireless cat
Playing with a phone: Nokia 6630 and Linux
I’m playing with my new toy: it’s a Nokia 6630 (I know it’s quite old, but I wanted the most affordable S60 phone I could find :P).
Of course I still think the perfect uber-geek telephone of my dreams is this, but I fear such devices will remain low-availability geek toys unless the various Linux phone manufacturers agree on some standard and interoperable platform…
By the way, Symbian is truly a very nice little OS; the only pity is that it’s not so open. Nokia offers a quite decent SDK that supports only Windows but is based on gcc, Cygwin and Perl (I find this a true delirium), and there is some way to set up a working SDK on Linux/OS X too.
So I hope to be able to write something useful for it in the near future, maybe a tool for making an SMS backup without the uber-evil PC Suite (see below) and some themes, because all the futile things are the most important ones 🙂 (and the nice thing is that both the apps and the themes are compatible with all S60v2-based phones, so stay tuned even if you have a slightly different model).
Getting it to play nice with Linux
The most important thing I want from it is, of course, making it play nicely with Linux, if only because of the ugliness of the Nokia tool (PC Suite, of course Windows only), which is an obtrusive, slow, buggy and bad-looking thing 🙂
In this section I will point to some very useful how-tos I found around here.
So the things I would like to be able to do with it are:
- Synchronizing addressbook, calendar and notes (works)
- Transfer files with OBEX (works)
- Reading/synching SMSes (still haven’t managed to get this to work)
- Using the telephone as GPRS modem (works)
- More perverse: using the computer as a modem for the telephone (works)
Of course all these things should work both over Bluetooth and with the more convenient USB DKU-2 cable bundled with it.
Available tools
The tools I have tried in my journey are:
- KMobileTools
- It’s a really nice piece of work (of course the most decent UI of the whole lot :D), but unfortunately at the moment it isn’t of much help, because it only supports the old AT protocol, which is very crippled on Symbian devices (there is early Gammu support, but unfortunately Gammu isn’t of much help either). All you can do is see the battery and signal levels and make outgoing calls.
- Gnokii
- It’s a tool specific to Nokia phones that (on S60 devices) features a “server” application called Gnapplet running on the phone. It should allow access to the address book and the SMS archive from the PC. I managed to access the address book, but when I try to download an SMS Gnapplet crashes (it has also been reported to crash on the Nokia 6600).
- Gammu
- Gammu is a fork of Gnokii, but as with KMobileTools, I only managed to access the battery and signal levels with it.
- OpenObex
- It’s a tool to perform OBEX file transfers between the phone and the PC, and it works flawlessly.
- OpenSync
- It’s a relatively new tool (and IMHO the most promising one) to synchronize the address book, the notes and the calendar between the phone and the PC. It supports various backends, from simple plain-text ones to integration with Kontact and Evolution. The installation was a little bit tricky, but it works quite well.
- Gnubox
- It basically forces the phone to use its TCP/IP stack over Bluetooth. It installs on the phone and lets you share the internet connection of the PC with the phone.
Synchronizing addressbook, calendar and notes
A very complete how-to on configuring OpenSync can be found at http://blog.dukanovic.com/?p=5. As I said, the installation was a little bit tricky because you need the most recent versions of the tools, downloaded from SVN. Just a little note: when he says to download wbxml2 version 0.9.0 and patch it, please do it! Even though the 0.9.2 version has that patch integrated, it doesn’t work (at least with this phone).
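Just to give an idea of the workflow once everything is built, a sync with msynctool goes more or less like this (a rough sketch, not a replacement for that how-to; the group name is arbitrary and the exact plugin names may differ depending on your OpenSync version and desktop):

msynctool --listplugins                              # check which plugins got built
msynctool --addgroup nokia6630
msynctool --addmember nokia6630 syncml-obex-client   # the phone side
msynctool --addmember nokia6630 kdepim-sync          # or evo2-sync for Evolution
msynctool --configure nokia6630 1                    # set the bluetooth address / USB interface of member 1
msynctool --sync nokia6630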
Transfer files with OBEX
The how-to can be found at http://wiki.splitbrain.org/nokia_6630. Once you have gotten OpenObex working, you can access the phone via the command-line obexftp client or with an ugly Tcl/Tk interface called ObexTool; there is also a KDE kio-slave, but it doesn’t support a USB connection and I didn’t manage to get it to work over Bluetooth either. The most promising method is a FUSE-based filesystem called ObexFS, but at the moment it too doesn’t seem to work very well.
Getting obexftp and ObexTool to work over Bluetooth is more straightforward; in order to get them working with the USB cable it is necessary to use a very recent version of OpenObex and obexftp, and ObexTool needs to be patched.
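For example, once everything is in place, transfers from the command line look more or less like this (a sketch: the Bluetooth address and the OBEX channel are placeholders, find yours with hcitool scan and sdptool browse):

obexftp -b 00:11:22:33:44:55 -B 10 -l /           # list the root folder over bluetooth
obexftp -b 00:11:22:33:44:55 -B 10 -g Image.jpg   # download a file
obexftp -b 00:11:22:33:44:55 -B 10 -p backup.sis  # upload a file
obexftp -u 0 -l /                                 # the same over the DKU-2 USB cable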
Reading/synching SMSes
Unfortunately I still haven’t managed to get this to work. AFAIK the only tool that can be used to access the SMSes is Gnokii (Wammu doesn’t seem to work), and as I said it accesses the address book correctly, but Gnapplet keeps crashing when it downloads an SMS (I will dig through the Gnapplet source code to see if I can find the cause, but the code looks very cryptic to me 🙁 ). A basic Gnokii setup is also covered at http://wiki.splitbrain.org/nokia_6630.
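For reference, the relevant part of my ~/.gnokiirc looks roughly like this (a sketch along the lines of that how-to; the Bluetooth address is a placeholder and some Gnokii versions may want extra options such as an rfcomm channel):

[global]
# talk to the gnapplet "server" running on the phone
model = series60
connection = bluetooth
# bluetooth address of the phone (placeholder)
port = 00:11:22:33:44:55

With gnapplet.sis installed and started on the phone, gnokii --identify and the address book functions answer correctly for me; SMS retrieval is exactly where Gnapplet dies.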
Using the telephone as GPRS modem
The how-to can be found at http://bitubique.com/content/view/26/42/. I still haven’t tried it much because it costs a butt-load of cash 🙂
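In short it boils down to a standard pppd + chat setup over /dev/rfcomm0 (or the USB serial device); a minimal sketch, where the device and the APN are placeholders to be taken from that how-to and from your operator:

# /etc/ppp/peers/gprs
/dev/rfcomm0 115200
connect "/usr/sbin/chat -v -f /etc/ppp/chat-gprs"
noauth
defaultroute
usepeerdns

# /etc/ppp/chat-gprs
'' AT
OK 'AT+CGDCONT=1,"IP","internet.operator.example"'
OK ATD*99#
CONNECT ''

Then pppd call gprs brings the link up.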
More perverse: using the computer as a modem for the telephone
Because accessing the internet with the GPRS connection of your phone is soooo costly, you may be interested in connecting the phone to the internet by sharing the connection of the PC. A good how-to can be found at http://www.rlachenal.com/bluetooth-6600-linux/. It’s for the Nokia 6600, but the procedure is nearly the same; there are only a few minor things to take into account:
- Download the right version of GNUbox, specific to the 6630 (gnubox_6630_80_81.sis)
- In the GNUbox configuration set “2Box Bluetooth” to “Lan Access Server” (if instead you use it under Windows you must set it to “serial”; a good how-to for Windows XP can be found at http://web.singnet.com.sg/~kinston/Bluetooth%20Internet.htm)
- After you have started dund, and before actually trying to connect your phone to the internet, you must go in the GNUbox menu to debug -> “bring up IF”
- For Nokia phones it works only over Bluetooth; only with Sony Ericsson phones can it also be used over a USB cable
- Make sure that the shell script used here uses the right interface names, i.e. that you create the interface from your PC to the internet (ppp0) before creating the interface from the phone to the PC (ppp1); otherwise change the interface names in that script (see the sketch after this list). If you are connected to the internet behind an Ethernet ADSL router you may want to substitute “ppp0” with “eth0”
- The built-in applications don’t seem to support the Bluetooth provider; you must install third-party apps and configure them accordingly. On the GNUbox site there is a screenshot showing the configuration for Opera for S60
- When the Bluetooth connection is lost it stops working and for some reason it can’t make a new one. The only way to fix it seems to be rebooting the phone (LOL :D)
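For reference, the PC side of that how-to boils down to something like the following (a sketch under its assumptions: the peers file name and the addresses are examples, and the external interface must be adapted as said in the list above):

# listen for the phone's "dial-up" connection over bluetooth
dund --listen --persist --msdun call gnubox

# /etc/ppp/peers/gnubox contains, roughly:
#   noauth
#   192.168.1.1:192.168.1.2    (PC address : phone address on the ppp link)
#   ms-dns 192.168.1.1         (or your real DNS server)

# share the PC's internet connection with the phone
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE   # use eth0 if behind an ethernet ADSL router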
oh, fucking spammers
It seems I have to implement a CAPTCHA on this broken website 🙁
Dear spammers, I wish you all a painful and slow death or at least may goatse.cx be permanently burned on your retina.
Infected!
University!
Now university is (for the moment) finished!!!
I graduated today; even if it’s only the first-level degree (3 years done in 5), I’m feeling rather less depressed.
Now I will have to do at least two more years of specialization, so I guess I will have all the time in the world to get depressed again :-).
But for the moment: WOHAAAAAAAAAAAAAAAAAAA!!!!
PHP, cms, wikis and Web 2.0, and ONE to rule ’em all
As I promised, here it is: a long and grammatically wrong rant about what I think of the status of PHP and of web applications in general, starting from the tools I used for my thesis (PHP5, PostgreSQL), going through a brief description of some of today’s PHP CMSes and wikis, and ending with what I think is still missing.
PHP5 rocks
Since that beastie will run on a dedicated server, there was no need to support the crappy hosting services that only offer PHP4/MySQL4. With PHP5 it is possible to use true object-oriented programming without being deadly slow, with all those must-have things like public/private visibility, inheritance and polymorphism, so you can shut up all those annoying Java programmers when they laugh at you :-P. Unfortunately, I’m absolutely sure that web hosting services won’t offer PHP5 until 2050 or so, partly because a big slice of PHP programs are written so badly that they pitifully fail to run under PHP5.
PostgreSQL rocks too
The system had to support complex hierarchies (simple restriction-free ontologies, like RDF; I leave OWL-like restrictions to my heirs :)). Implementing hierarchies with that crappy DB engine that begins with “M” is painful: it does not support triggers and its default engine doesn’t even have foreign keys. Au contraire, PostgreSQL is almost perfect, so, as with all the best things, nobody cares about it and everybody goes with the cool and trendy MySQL, being forced to move all of the integrity checks into the PHP code. Maybe I will give MySQL a second chance with the promising version 5, but I still haven’t tried it.
PhpWet sucks
My poor CMS that runs this site sucks badly. I realized that when I had to migrate the site from www.fosk.it to www.notmart.org. Changing some pathnames and adding some sections was very painful, and anyway: what was I smoking when I decided to write my own broken template engine when so many cool template engines already exist?
But anyway it is still the only CMS I can use without going mad, because 1) every system has its very own idiosyncrasies and I know mine, and 2) more objectively, it is the only one that supports all of the things I consider non-negotiable. In particular:
- all articles must go into a well-ordered hierarchy (sorry, but I think too much in hierarchies :), I will explain why)
- multilingual support (translating also the articles, not only the interface)
- and last, it must be easy to add more complex objects as article types: for example, not only articles with a title and content, but things with more fields as required by other applications (for example photo, description, price and quantity available for an item in an e-shop site).
And what about other CMSes, wikis and journals?
Every CMS I’ve tried fails at some things and excels at others (or wiki, or journal, it doesn’t matter: that distinction should be made for the final website, not for the engine, otherwise it only justifies a lack of flexibility). In particular:
- WordPress
- Only a journal: the world is not a blog, and even if it can run some other kinds of websites and is getting better, it is still not ready; but it will probably be able to handle more generic content in the near future.
- Mambo
- After trying it for a job that in the end never got done, I realized I hate it: it’s complex, it produces crappy HTML, it supports translations only with a third-party add-on, its administration panel uses an ugly JavaScript menubar, and it is difficult to escape from the “list of latest news” scheme and build a more ordered hierarchy (oh no! again with that obsession :))
- Drupal
- This big beast is rather interesting and I will definitely learn more about it. Roughly summing up, here are the things I like about it:
- The HTML it produces is quite a bit better than Mambo’s.
- It supports a nice taxonomy system, but I think the nodes should have been forced to always be comprehensible names instead of numerical ids that can later be renamed as strings (i.e. nobody will ever do it :)).
- It uses clean URIs with a good use of Apache mod_rewrite: www.mysite.org/category/subcategory is fairly nicer than www.mysite.org/index.php?id=261765&action=edit&foo=bar&bah=antani :-). The first one is easier to read for both humans and search engines.
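For reference, the kind of mod_rewrite rule behind such clean URIs is roughly this (a generic sketch of the usual .htaccess approach, not necessarily Drupal's exact rules):

RewriteEngine On
# leave requests for real files and directories alone
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]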
There are surely also some things I don’t like: in particular, translation support, as in Mambo, is done only with a third-party add-on that I would truly like to see included in the official version. Moreover, I believe the database structure could have been fairly simpler; I still haven’t studied the internals well, but that 55-table madness sounds strange to me.
- CMS made simple
- This, as its name suggests, is fairly simpler and younger than the previous two, but I think it’s one of the most promising. It’s a no-nonsense CMS that supports articles ordered into a simple hierarchy (yeah :)), news (with RSS, like all the others) and a simple search function. There are some things I would like to see in a future version, and then I would absolutely love it:
- Maybe the news items should also be articles in the hierarchy.
- Translation of the content, of course.
- The ability to add richer content (like the e-commerce example before, but it could be something else).
- Mediawiki
- And now His Majesty MediaWiki, which runs Wikipedia, my daily drug :). Being a wiki, this is a very different kind of beast compared with the previous ones. Even if I don’t know it very well (and I will surely remedy that), let’s see some of the peculiarities that made me reflect a lot:
- Track changes: due to its open nature (in the sense that everybody can modify the content), every change made can be easily rolled back and a history of all changes is always accessible.
- Heavily based on search: this is mandatory when you have millions of entries like Wikipedia, but the latest version has a comfortable way to edit the sidebar with the pseudo-page MediaWiki:Sidebar.
- The articles are stored and edited in its own simple markup language rather than in HTML. This way they can be edited faster than HTML, and it is theoretically simpler to translate them into languages other than HTML, or into future, newer versions of HTML (the day it becomes possible to download auto-generated PDFs of Wikipedia articles I will be the happiest child in the world :)).
- The relations/hierarchies between articles (think about the taxonomy of plants or animals on Wikipedia) are computed from the article content rather than from explicitly specified table fields. This leads to a more flexible and faster way to categorize articles. In order to avoid performance problems the article structure must be parsed at publishing time and the relations put in the database anyway.
Web 2.0? Did a Web 1.0 ever exist?
Today there is no buzzword buzzier than “Web 2.0”. But it’s still a very vaporous concept. To me even a “Web 1.0” doesn’t exist yet, because we are still in a 0.something era, just past “this site is optimized for Internet Explorer at 800×600 resolution”. At the moment there isn’t yet a clear agreement on how the web should look in the future. Somebody has even tried to make a validator that checks whether your site is Web 2.0 compliant, but IMO it has some serious problems. First, it ties itself too much to the concept of the blog (e.g. “Has a Bloglines blogroll?”), but although blogs have become an important part of the internet they are not and will not be the internet. It also attempts to tie itself to a specific language (“Appears to be built using Ruby on Rails?”), while the strength of the internet since its creation has always been its platform independence (even with the various hijack attempts by Microsoft).
The leading group is the Semantic Web activity, part of the W3C, which works on some very interesting things with the constant risk of being too academic (with academic <=> useless and complex). The work that started it all was RDF, a simple and very generic XML schema for representing subject-predicate-object constructs. It’s not meant to be used alone but together with RDF dictionaries, and so it becomes RSS, OWL and many others. That is also why RSS has many incompatible variants: RDF is very generic.
So, what does the perfect website look like?
After all, why is managing hierarchies well so important?
Yeah, I know, hierarchies are absolutely not an intuitive thing and they will probably help only a few users navigate the site (have you ever used the sitemap to navigate a site? I haven’t), because it is search that is becoming more and more important, being easier and more natural. But I think a clean hierarchy helps whoever creates the site to do a more organic job, and that’s not even the most important thing. If a hierarchy (or better: an ontology, which in veery poor words is a collection of objects and relations between objects) were exported in a clear and formal language like OWL or some other (hopefully simpler) RDF dictionary, it would enable some cool automated tasks, like helping search engines understand what the page relations are (maybe there could be some kind of relation between two pages even if there are no hyperlinks between them) or syndicating not only the news page (RSS) but also the other content.
A CMS? A wiki?
I think one engine should be able to manage “content” in whatever way the administrator wants to display it. So an engine should be all of these things together; the final look/behavior is just a detail.
How the content should be edited/stored on the server
And most important: what format should be used? HTML would be the most obvious answer, but I think the ability to download a page/article in richer formats like PDF or OpenDocument is very important, and it is also important to remember that HTML is not a fixed entity and will change, even radically, in the next few years (when somebody will actually support it is another story 🙂 ).
So, in order to create a system that is a little more future-proof, it is necessary to make the storage format on the server pluggable; some candidates pop into my mind:
- MediaWiki format: very simple, even to write by hand, and I think it contains enough information to be converted into a nicely printable format. It would probably also be easy to write a WYSIWYG editor for it.
- DocBook (which would be translated into HTML and other final formats by an XSLT stylesheet): very complex and expressive, but the available editors are too ugly (see the sketch after this list for the conversion step).
- OpenDocument: very complex and expressive; publishing a new article would be as easy as uploading an OpenOffice file. But producing decent HTML would probably be hard and computationally expensive (it would be necessary to cache everything).
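As an example of the DocBook route mentioned in the list, the HTML conversion step is a single XSLT run (a sketch; the stylesheet path is distribution-dependent and is only an example here):

xsltproc --output article.html \
    /usr/share/xml/docbook/stylesheet/nwalsh/html/docbook.xsl \
    article.xml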
And of course, since HTML is not the alpha and the omega of representing information, it should be possible to download the page in as many different formats as possible. You want an HTML page? www.notmart.org/foo/bar.html. You want a PDF file? www.notmart.org/foo/bar.pdf, and so on.
How the user should publish content
This should be an easy procedure for everybody, even the most computer-illiterate, and of course it should be totally platform independent.
It would be nice to be able to edit the site content both with a normal web browser and with a dedicated client: think about XML-RPC and the Flock blogging interface, but of course something not limited only to blogging. Obviously the ad-hoc client won’t be the only way to edit content: in order to maintain total platform independence a web-based interface should always be available (and of course somebody will hate the graphical client and somebody else will hate the web interface too, so the choice must always be there).
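Just to make the idea concrete, a dedicated client is ultimately sending something as simple as an XML-RPC POST; here is a sketch with curl and the MetaWeblog API (the endpoint URL, credentials and blog id are made-up placeholders, and a real engine would expose methods for every content type, not only posts):

curl -H 'Content-Type: text/xml' --data-binary @- http://www.example.org/xmlrpc.php <<'EOF'
<?xml version="1.0"?>
<methodCall>
  <methodName>metaWeblog.newPost</methodName>
  <params>
    <param><value><string>1</string></value></param>
    <param><value><string>username</string></value></param>
    <param><value><string>password</string></value></param>
    <param><value><struct>
      <member><name>title</name><value><string>Hello world</string></value></member>
      <member><name>description</name><value><string>The article body</string></value></member>
    </struct></value></param>
    <param><value><boolean>1</boolean></value></param>
  </params>
</methodCall>
EOF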
Whether I would like AJAX or not for the web-based interface I am not sure: some things like Google Maps are cool, but try using them with a 56k modem or with an old web browser :-). So if advanced JavaScript, AJAX, XForms (when in the year 2050 some browser supports it 🙂 ) and other buzzwords are used, there should always be a plain old HTML version, like Gmail has.
That’s it 🙂
Here are my random thoughts about the web. Maybe some day, if I have some time, I will try to implement some of these ideas, but I probably won’t have time, and probably tomorrow I will have totally changed my mind on the subject. In the meantime I will keep searching for the perfect system, which I’m sure is out there somewhere 🙂
University, PHP and everything
The end of university is near (or at least of the first level: the silly Italian 3+2). It makes me less depressed than usual about how I’m chronically late with life. Since I’m a PHP junkie I was quite lucky to have the chance to write a PHP-related thesis.
I will not bore everybody with the uber-boring thesis topic (a dictionary of legal terms, of course web-based, otherwise it would lose the cool factor :)); anyway, maybe one day when everything is finished I will publish it here… or not, who knows :)). Writing that thingie let me learn many small things about what my evil plans for the ultimate website/content manager are.
So be prepared for a long and grammatically poor rant about many trendy buzzwords like PHP, cms, wikis, web 2.0 and world domination 😀
New hosting launched
Today the migration to www.notmart.org should be complete. I have also added some new sections and made some modifications to the engine, so there could be a bunch of problems, PHP errors and broken downloads; please be patient 🙂
New hosting
In a couple of days I should finish the migration of this site to a new host, so this site is about to become www.notmart.org.
I wish to thank my friend fosk for hosting me for over a year 🙂