After reading the Every Email In UK To Be Monitored article and its comments over at Slashdot I once again felt like encrypting each and every email I send using GPG/PGP. For this encryption to work, the person I am sending a message to would need to have GPG/PGP set up too. A lot of technically minded people already have this set up, but I cannot expect everyone to be using encryption.
The reason not everyone uses GPG/PGP to encrypt their email might be that, even though GPG/PGP have become a lot more usable for the end-user in the last few years, these programs are probably still too technical, and thus hard to understand, for non-technical users.
So I thought a little about how people could be brought to use public-key encryption for email. After a bit of brainstorming an idea came to my mind, which I would like to present here.
Basic idea
What about creating a program that acts as both an SMTP and a POP3/IMAP proxy server, includes all the encryption logic, and encrypts/decrypts messages transparently?
If this logic were moved out of email clients, we would get a solution that works universally with each and every email client out there.
How this could work
Imagine sending an email to someone you have never emailed before. You write the message in your email client as you are used to and hit the send button. Now, instead of connecting to your SMTP server, the email client would connect to the email proxy program and submit the message there.
At this point the program would check the email's sender and recipient. If the sender has a public/private key pair and the recipient's public key is known, the program would prompt you for the passphrase to your encryption key. After entering the passphrase and hitting a button again (send, sign, encrypt; I guess you can think of a more appropriate name) the message would be encrypted and then forwarded to your SMTP server.
On the other hand, if the public key of the recipient is not known (and cannot be fetched off key servers), the program could send a message informing the recipient that you wanted to encrypt your email but were unable to do so, explain that this program exists, where to get it from, how to set it up, why encryption is important, and so on. I can imagine both a hard-fail mode, sending only this notification, and a soft-fail mode, combining the automatically generated notification with the original message somehow (attach it, inline it, etc.). Either way, the generated message should be cryptographically signed.
Receiving mail would work the other way around. The proxy would fetch messages off all configured IMAP/POP3 servers on its own and check whether they are signed. If a signed message arrives, the sender's public key should be imported into the local keyring, if that has not been done already. Encrypted messages should be handled the same way, plus decrypting the message.
The email client would connect to the IMAP/POP3 proxy server and fetch the (decrypted) messages from it. Both unencrypted and unsigned messages should be marked somehow (think subject rewriting here, and maybe adding an X- header). However, no automatic sending of emails should happen when receiving messages, as the From header could be forged (spam, anyone?).
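The sending-side logic described above could be sketched roughly like this. This is only a toy illustration: the keyring, the encrypt_for() stand-in and the notification wording are all hypothetical, and a real implementation would call out to GPG instead.

```python
# Toy sketch of the proxy's sending-side decision logic.
# KEYRING and encrypt_for() are hypothetical stand-ins for a real
# GPG keyring lookup and a real GPG encryption call.

KEYRING = {"alice@example.org": "ALICE-PUBKEY"}

def encrypt_for(pubkey, body):
    # A real implementation would invoke GPG with the recipient's key here.
    return "[encrypted with %s]\n%s" % (pubkey, body)

def process_outgoing(sender, recipient, body, hard_fail=False):
    """Return the message bodies the proxy would forward to the SMTP server."""
    pubkey = KEYRING.get(recipient)
    if pubkey is not None:
        # Recipient's key is known: encrypt and forward.
        return [encrypt_for(pubkey, body)]
    # No key known: generate the notification message (which a real
    # implementation would cryptographically sign).
    notice = ("%s wanted to encrypt this email to you, but no public key "
              "for %s is known." % (sender, recipient))
    if hard_fail:
        return [notice]                 # hard-fail: only the notification
    return [notice + "\n\n" + body]     # soft-fail: notification + original

print(process_outgoing("me@example.org", "alice@example.org", "hi")[0])
print(process_outgoing("me@example.org", "bob@example.org", "hi")[0])
```

The hard-fail/soft-fail switch maps directly onto the two modes described above: either only the signed notification goes out, or the original message is delivered alongside it.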
Features of the program
The program I have in mind should include the following features:
- GPG key management (creation, distribution to keyservers, etc.).
- Automatic encryption/decryption and signing/checking signatures.
- A non-technical interface, so everyone can use it.
- Support multiple IMAP/POP3 and SMTP servers, so it can act as a central point for storing all Emails a user could receive.
- Cross-platform functionality (Java? Python?)
- Free Software
Plans
I would love to implement this program, but fear that it could be way too much work for a single person. If you are interested in helping with the implementation or simply have any comments, feel free to either drop me an email at blog at sp dot or dot at or use the blog's comment function.
I hope I explained my idea clearly enough and did not miss anything.
Happy hacking!
Comments
The European Parliament (EP) has just recently started a new service: EuroparlTV, a web-TV service meant to give citizens of the European Union (actually everyone around the world) a way to inform themselves about how the EP works, what it does, and so on.
After I first read this news over at heise (German) I was impressed, but started to fear that yet again some sort of government had invested in proprietary software and would bring its services only to users of such software. Seconds later my fears became reality.
EuroparlTV seems to work only for users of either Adobe's proprietary Flash player (via the proprietary Adobe Flash file format) or users of Microsoft's Windows Media Player (via the proprietary WMV file format).
What this means to an open web, that is usable for everyone, should be clear.
Basically this is a service all citizens of the European Union pay for, but some cannot use. Is this really how governments (and the EP is some sort of government) should treat their citizens? Rather not.
On the one hand the European Commission is fighting vendor lock-in and monopolies, but on the other hand it directly helps these vendors by creating such services. Not a smart move in my opinion, nor an understandable one.
What I am asking myself, though, is why the EP was unable to create such a service, which in itself could be quite interesting, without forcing all of its users to run proprietary software.
Is it so hard to deliver the service in a free (as in freedom), standardized format?
I will leave answering these questions to you, but keep in mind that there are alternatives to this whole proprietary mess, like Ogg, which are completely free.
Personally I am pretty disappointed by this move. However, I hope that I at least informed people that there is a problem with EuroparlTV.
To put it simply and shortly: this way the EP does a great deal to help vendor lock-in whilst working against the freedom of its own citizens, even though it should be the other way round.
Comments
Even though this is meant to be an introduction to sptest, I want to start off by letting you know why I wrote this extension to the Python unittest module.
I am currently working on a (still private) project that uses Python's unittest module and the underlying framework. Even though unittest is a great utility for creating unit tests, I found the output it generates unusable for my purposes. I wanted something different, maybe a bit more aesthetic than the simple command-line output unittest provides.
So I started off writing a class extending unittest.TestResult to fit my needs. I soon realized that interfacing with this part of unittest is not as easy as it could be, but I still continued to write my class.
After two hours of hacking I noticed that this class had become a monster. It was huge, and I felt uncomfortable having such a huge class lying around somewhere in a "runtests.py" file for the sole purpose of having that pretty output.
This was the point when I decided to move all that code into a separate project and try to come up with a more intuitive API. That was the moment sptest was born, about five hours ago.
What I did come up with is a small Python module that makes customizing the way unit test results are presented (or stored) easier. It currently includes two output handler classes. One providing fancy CLI output on ANSI terminals and the other one providing XML output.
Additional output handler classes could store the result of the unit tests in a database or send it to a central point on the network, but implementing that is up to someone else, for now.
Running unit tests with sptest is as simple as calling:
sptest.TestMain(TestSuite).run()
By default the FancyCLIOutput handler class will be invoked, and you will immediately see why the handler is called the way it is.
In order to generate an XML file containing the test results one just has to modify the call to sptest to look like this:
sptest.TestMain(TestSuite, output_class=sptest.output.XMLOutput).run()
sptest also provides support for preparation and cleanup functions. The only thing you have to do is define these functions and adjust the arguments passed to TestMain accordingly.
Most of the code is already documented, and a doxygen configuration file for generating the HTML documentation comes with the code. Also, two examples are included that show how to use sptest.
Comments
UPDATE: You can find the update to this article at its bottom.
Even though Google's slogan is "don't be evil", I am not entirely sure whether this also applies to their newest development: the Google Chrome browser.
The announcement over at the Official Google Blog tells us that Google is about to release a Free Software-based browser. When I first read the announcement I wasn't too impressed reading that Google has actually built a browser; this was logical and I have been expecting this move for years. Also, reading that they based their browser on Free Software didn't impress me too much either, but then I found the comic.
The comic contains a lot of information about the browser's architecture, and I like the design. It makes perfect sense, even though it could create some memory and processing overhead, but don't all major browsers consume "quite some" resources? So, from a technical point of view, the browser sounds great, but there is a huge downside too.
The product announcement says that the browser is not only built upon Free Software, but is Free Software itself. Now, this sounds good, but then I had to read this:
This is just the beginning -- Google Chrome is far from done. We're releasing this beta for Windows to start the broader discussion and hear from you as quickly as possible. We're hard at work building versions for Mac and Linux too, and will continue to make it even faster and more robust.
I don't want to start nit-picking on the use of the term "Linux" for describing the GNU/Linux operating system there, even though I have to mention this fact.
What really bothers me is that it seems as if a binary-only release for Windows is being prepared, and only that. In my opinion this is bad. I would much rather have read "a binary beta version for Windows will be made available along with the source code licensed under the terms of the <insert your favourite Free Software license here>".
Why? Because this way people could start tinkering with the code and thus help making a GNU/Linux version available sooner. Not seeing the code released makes the "Free Software" promise sound void.
Still, nothing has happened yet. Google has merely announced the upcoming release of Google Chrome. No details have been made available on whether the code will be released along with the Windows binary, but I fear we won't be getting hold of the code for a while.
This leads me to the title of this article: Is Google Chrome good or evil?
Well, if Google keeps the promise to release Chrome under a Free Software license and does so rather sooner than later I believe Google Chrome should not only be called "good". It would then qualify as a real alternative to Mozilla Firefox and could even be superior to Firefox.
On the other hand, if Google does not release the code timely, releases the code under a proprietary license or does not release the code at all Chrome could and possibly should be tagged "evil".
Personally, I am awaiting the release of Google Chrome. I would like to test it, see the code, maybe dig a bit into it, and possibly make it my browser of choice. The reason for this is quite simple: I am tech-savvy, and the technology used in Google Chrome sounds more than just interesting; it could actually be a step forward for the web, both in increased usability for the user and in the use of Free Software and Free Standards as a way to help the web evolve. If Google doesn't keep the Free Software promise, though, expect me never to touch that evil beast.
UPDATE (September 3, 2008 at 7:44am CET):
Now that Chrome has been released, Google apparently did also release the source code to Chrome: Chromium. The Chromium project page can be found here, the Google Chrome home page here.
Now it seems as if Google did make Chromium a Free Software browser. (I say "seems" because I have not yet gotten around to downloading the tarball and checking its contents, but I do believe it actually is Free Software, and for me there is no reason not to believe that anymore.)
I am more than just happy with this because, as I pointed out in this article already, Google Chrome, or Chromium, does have an interesting architecture and should, in my opinion, be embraced by the Free Software community. The reason I am happy is not only the fact that it is Free Software, but rather that a company like Google releases a lot of Free Software these days, and personally I hope other companies will start following this example soon.
Thanks, Google, for taking this step!
So to make it short: Google Chrome? Not evil, good!
Now a short word to the commenters of this article: most comments have been helpful and I really appreciated them. Sorry that an update to this article took so long, but I live in Europe and was asleep while all the things you mentioned happened.
Comments
I am currently writing a Python application that uses GNU Autotools as its build system and noticed that determining whether a specific Python module is installed is not that easy, and no usable Autoconf macro exists. So I came up with my own solution, which I would like to share with you.
The AC_CHECK_PYTHON_MODULE macro takes two arguments: the module name and, optionally, the name of the variable holding version information. This way it is possible not only to determine whether a module is installed (i.e. loads in Python) on the current system, but also to retrieve version information from that module.
The following example checks whether the Crypto module is installed and retrieves its version information from Crypto.__version__:
AC_CHECK_PYTHON_MODULE(Crypto, __version__)
The macro itself never reports an error, but rather only a found/not-found result. Error checking is up to the user and can be done via these two Autoconf variables:
- PYTHON_<MODULE_NAME>
- PYTHON_<MODULE_NAME>_VERSION
PYTHON_<MODULE_NAME> is set to "1" if the module is present and "0" if it is not.
PYTHON_<MODULE_NAME>_VERSION is only set when the version variable argument has been given. It contains the version information of the module if the module has been found; if the module is not present, this variable is also set to "0".
The version variable argument is optional, as I wrote, so the following invocation works too and only checks whether the distutils module is present:
AC_CHECK_PYTHON_MODULE(distutils)
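For the curious, the check the macro performs boils down to running the interpreter and trying to import the module. The idea can be sketched in plain Python (an illustration of the mechanism, not the macro's actual m4 code; the function name is made up):

```python
import subprocess
import sys

def check_python_module(module, version_var=None):
    """Mimic the macro's result variables: returns (present, version),
    where present is "1"/"0" and version is the module's version or "0"."""
    code = "import %s" % module
    if version_var:
        # Read the version attribute, falling back to "0" if it is missing.
        code += "\nprint(getattr(%s, %r, '0'))" % (module, version_var)
    proc = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True)
    if proc.returncode != 0:
        return ("0", "0")          # module failed to import
    version = proc.stdout.strip() if version_var else "0"
    return ("1", version or "0")

print(check_python_module("os"))                    # present, no version asked
print(check_python_module("no_such_module_xyz"))    # not present
```

Running the import in a subprocess rather than in-process matters in the Autoconf context: the macro must test the Python interpreter configure has detected, which is not necessarily the one running any other tooling.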
As I wrote earlier in this article, I would like to share this macro with you. You can download it here.
Comments
I have recently bought a new laptop, a Samsung P55-Pro T8100 Sevesh. As I was not able to find an installation report for this model anywhere on the internet, I thought writing one myself would be a good idea. This way, people interested in getting this laptop or installing GNU/Linux on it can get some information.
The article covers the hardware configuration of the laptop itself, a list of which features of the laptop work and which don't (do not be afraid, most things work perfectly well out of the box), and finally a short installation report.
First of all, let's have a look at the hardware configuration of this laptop:
- Intel Core 2 Duo T8100 CPU (2.1 GHz)
- 2GiB DDR2 RAM (PC2-5300 - 667MHz)
- Intel GM965 chipset with integrated Intel GMA X3100 graphics adapter
- 250GiB HDD
- SXGA+ display with a resolution of 1400x1050
- Intel PRO/Wireless 3945ABG WiFi adapter
- Intel 82566MC NIC
- HD Audio Codec, ALC262 sound adapter
- AuthenTec AES1600 fingerprint reader
- Infineon TPM module
- Ricoh cardbus bridge (RL5c476 II) plus cardreaders and IEEE1394 controller
- One cardbus (PCMCIA II) and one Express Card/54 slot
Now on to the list of what does and what doesn't work with GNU/Linux.
Intel GMA X3100 graphics adapter
Works out of the box. Full resolution is possible without a hack; VGA out works out of the box in both mirror and extended desktop mode.
NO xorg.conf modifications are needed in this setup, everything works perfectly well with a nearly empty xorg.conf!
The only thing I had to modify was making the virtual display a bit bigger so that extended desktop mode works with an external monitor having a resolution of 1680x1050 pixels.
Intel 82566MC NIC
Works out of the box, no further configuration needed.
Intel PRO/Wireless 3945ABG WiFi adapter
Works with the iwl3945 driver; however, it requires something Intel calls "ucode", a proprietary firmware. Without this piece of firmware the card does not work. If you want to use WiFi without the need for proprietary software (the ucode), you will have to go for a USB, PCMCIA or Express Card/54 WiFi adapter.
HD Audio Codec, ALC262 sound adapter
Works out of the box.
AuthenTec AES1600 fingerprint reader
The fingerprint reader is said to be working with fprint, which I have not tested yet, though. Expect an update sometime soon.
Infineon TPM module
Not tested.
INSTALLATION REPORT
Basically, the Debian GNU/Linux 5.0 installation went smoothly using the beta2 netinstaller image. The system booted from CD-ROM and the installation process worked fine.
After rebooting into the new system, however, the system froze. No response, nothing. The last message on the screen suggested that the ACPI video module was the problem.
After rebooting using init=/bin/sh as boot argument, I modified /etc/modprobe.d/blacklist and added the following line:
blacklist video
This is only a workaround for the real problem. The bug is present in Linux 2.6.25 and Linux 2.6.26. A bug report has been filed (here). I will update this page as soon as the problem has been resolved.
There is another thing which doesn't seem to work. However, this could be (and likely is) related to the broken ACPI video kernel module: adjusting the display brightness.
On AC power the system boots with maximum brightness, which cannot be adjusted. Unplugging the AC adapter lowers the brightness.
When running on battery one can use the "brightness up" key combination to switch to maximum brightness; however, this cannot be undone.
CONCLUSION
The laptop is not only usable under GNU/Linux; most hardware works, even out of the box. The only real problem is the broken ACPI video module, which will hopefully be fixed soon.
I hope this article helps those who would like to get one of these laptops, but are not sure of its GNU/Linux compatibility, just like I was.
Comments
This article is the second in my series about the flaws of (E)SMTP, the whole Internet mail infrastructure, and how it could possibly be fixed. The main focus of this part is a new approach to the infrastructure which should help make emailing more secure, reliable and less spam-prone.
The first article can be found here and points out flaws and problems in the current systems.
Before going into detail about what the infrastructure could look like, I would like to point out the goals of my proposal:
- security through end-to-end encryption
- security through sender and server authentication
- integrity of message contents
- built-in load-balancing support
- getting rid of email forwards
These five major points should be covered directly by a new infrastructure and should be mandatory. There is no point in making any of these optional as the rest of this article should point out.
security through end-to-end encryption
Even though both SSL and TLS support exist for (E)SMTP, these features are optional. In fact this means that it is possible that, even though one submits his or her email over a secure channel, the message is transferred in plain text somewhere on the way to its destination.
This enables an attacker to snoop on your message somewhere along its way. Whilst some people believe this is okay, I strongly object to anyone being able to read either my private or my business emails.
The solution to this problem is end-to-end encryption. The new infrastructure should make encryption of all messages exchanged mandatory and further provide a way of encrypting the message contents. This way only the intended recipient can actually read the message (as in: not even a server administrator with direct access to a user's mailbox). End-to-end encryption of the communication channels should be done by using TLS for all communication between clients and servers and for server-to-server communication.
Encrypting the message payload could be done in a similar (if not the same) way as OpenPGP (RFC 4880) works.
security through sender and server authentication
The next feature a possible SMTP successor should provide is sender and server authentication. As TLS should be mandatory for the implementation, the easiest way to achieve this is using a public key infrastructure. This could then in turn be used for multiple things, including message integrity checking, encryption of message contents, authentication of the sender, and authentication of the server.
Integrating a public key infrastructure could be done by having special DNS (maybe TXT) records that contain the address of key servers. These key servers would store not only a domain-root certificate which would allow user and server authentication but also all user and server certificates themselves.
A receiving server could then check the sending domain's key server for both the domain-root certificate and the sending server and thus verify that the message is legitimate and actually originated from the specified domain.
Sender authentication works together with message integrity. Basically the receiving server opens the message, gets the client's message signature from the message, and asks the sending domain's key server for the public key of the sender. The receiving server then checks the signature and this way verifies the sender.
integrity of message contents
Integrity checking is closely related to sender verification. As the receiving server checks the sender's message signature in the sender-verification process, the message is automatically checked for integrity too.
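The sign-on-send, verify-on-receive flow can be sketched as follows. Note that this is only an illustration: it uses an HMAC with a shared key as a stand-in for a real public-key signature, whereas in the proposed infrastructure the receiving server would fetch the sender's public key from the sending domain's key server instead of sharing a secret.

```python
import hashlib
import hmac

# Stand-in for the sender's signing key. A real system would use a
# private key, with the public half published via the key server.
SENDER_KEY = b"sender-signing-key"

def sign_message(body, key):
    """Sender side: attach a signature computed over the message body."""
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "signature": sig}

def verify_message(msg, key):
    """Receiving server: recompute the signature and compare. A match
    verifies both the sender (the key holder) and the message's integrity."""
    expected = hmac.new(key, msg["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["signature"])

msg = sign_message("Hello, world", SENDER_KEY)
print(verify_message(msg, SENDER_KEY))   # untampered message verifies

msg["body"] = "Hello, w0rld"             # tamper with the body in transit
print(verify_message(msg, SENDER_KEY))   # integrity check now fails
```

The key point the sketch demonstrates is that one signature check covers both goals at once: a forged sender fails because they lack the key, and a tampered body fails because the recomputed signature no longer matches.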
built-in load-balancing support
Load-balancing is also closely related to the PKI approach. The sending server could use the receiving domain's key server to locate the server to send the message to. This way load-balancing of receiving servers can easily be implemented. Furthermore, load-balancing of multiple key servers for a single domain is possible using DNS round-robin records.
getting rid of email forwards
Forwards can also be gotten rid of by using the receiving domain's key server, similarly to the load-balancing approach. Instead of pointing the sender to a domain-local receiving server, the key server could simply point the sender to another domain's receiving server. This way the message would not really be forwarded or relayed anymore; rather, a pointer to where the message should be stored would be provided.
Putting everything together
Making all the mentioned features mandatory for a possible successor of SMTP would benefit users in several ways. Firstly, users could rely on the integrity of the message, on the sender actually being the person he or she pretends to be, and on the fact that snooping on the contents of the messages they send is hard to impossible.
Furthermore, this infrastructure should make sending spam a lot harder, as domains for sending spam would have to be bought, DNS servers and key servers would need to be operated, and blocking unwanted messages could be as easy as blocking either a domain or a single user using the information provided through their message signature.
ISPs would benefit from the built-in load-balancing mechanisms and the mailbox alias feature (forwarders). Whilst the load-balancing technique simplifies setting up and operating a load-balanced infrastructure, the mailbox alias feature should help cut down on the traffic generated by email forwarders.
Please be aware that I intentionally left out all implementation-specific details, such as the message exchange protocols. More technical aspects of a possible implementation will be covered in the next parts of this series. As always, comments are highly appreciated.
Comments
It has been quite a while since I last wrote an article and published it here.
It's not like I got tired of blogging. The reason there hasn't been an update for such a long time is that I have been doing my final exams over the past two months.
After passing my exams on Friday I should have time to write some articles again, so watch out for new articles here.
Comments
This is one question I have been interested in ever since I started using GNU/Linux.
Just think about it for a moment. About 20 years ago you got specifications for pretty much every piece of hardware you bought. You were given exact instructions on how to use the hardware you just bought, not only how to install it. Things have changed since then.
If you buy any piece of hardware today you actually have to expect not to get any documentation on how to "talk" to your new toy. You are only given a CD (sometimes even only a link to a homepage) containing drivers for a few specific operating systems, usually only Microsoft Windows.
Now, I am no driver hacker, so I probably wouldn't be able to implement a driver for anything on my own anyway, but the Free Software community would benefit greatly from hardware documentation, as there are a lot of capable driver hackers out there.
This is not a problem that only affects the Free Software community, though. There are a lot of pieces of hardware which do not work on recent proprietary operating systems anymore due to lack of support by their manufacturers.
At least this problem would not exist for Free Software operating systems, such as GNU/Linux, if hardware makers published documentation for their hardware. The people still using devices which are well beyond their end-of-life could implement drivers on their own, without being dependent on anyone.
What I am really wondering about in this case is why hardware companies are unable to coin standards for accessing devices of the same class. It works perfectly well for USB (take USB mass storage devices as an example), and I do not understand why there can't be standardized interfaces to other hardware, such as network adapters, as well. On a very low level these standardized interfaces do work; just think of PCI, PCI Express or AGP.
Actually, if you think about this for a few more seconds you should realize one thing: having standardized interfaces for devices of the same class would cut a lot of costs for hardware makers. Why? Well, if they design a brand-new networking chip and still implement the given standard, there is no need to write a new driver. Wait, there would be no need for per-device drivers at all. Implementing a common driver that accesses the standardized interface would be enough, for a whole range of devices.
So what am I asking of hardware makers? I would love to see companies creating devices of the same class get together, create standardized interfaces, publish them, and implement them in their new devices.
I know, this is not likely to happen anytime soon, so a more realistic approach is asking for Free Software drivers and/or documentation.
Personally, I have stopped buying hardware which merely "works" with GNU/Linux. I have come to the point where I try to buy only hardware which either comes with Free Software drivers from the manufacturer or with documentation which allows the implementation of Free Software drivers.
This is probably the best way of showing these companies what you demand: Freedom.
Comments
I was quite stunned when I noticed that the Free Software Foundation (FSF) has recently started a new monthly newsletter, called the Free Software Supporter.
The reason I was amazed is not the fact that the FSF is now publishing such a newsletter, but rather the fact that I had not heard about it yet. Basically, the Supporter is about informing Free Software enthusiasts about recent happenings and the work of the FSF, the GNU project and the global Free Software community.
It seems I am not the only person excited about the Supporter. Joshua Gay, who apparently writes the Supporter, also seems to like it, as he writes in a blog post:
I hope that you enjoy the Supporter. I am looking forward to reflecting each month upon the work of the FSF, the GNU project, and the global free software community. I only hope that the number of highlights I add each month will continue to grow as quickly as the community is growing. In either case, we hope to keep it short and we hope to keep you informed.
You can sign up to receive the Supporter via email on a monthly basis at http://lists.gnu.org/mailman/listinfo/info-fsf and you can read the first issue online at http://lists.gnu.org/archive/html/info-fsf/2008-03/msg00000.html.
Also, if the Supporter looks like an interesting read to you, you may as well enjoy the monthly newsletter the FSF Europe publishes. The FSFE Newsletter can either be read online, or you can sign up for the FSF Europe press-release mailing list.
Personally, I believe both newsletters are worth reading and give you a great overview of what has happened in the past month, what is going to happen, and the work done by the FSF and FSF Europe.
Comments