The Journey to Storing SMTP Passwords in a Database


Back in the days when spam was not a thing, and the internet was simpler, if you wanted to give users an email address under your domain, you’d just add a forward to your mail server configuration. That took care of the receiving side, and sending could usually be done with whatever mail server people already had. Nobody bothered checking the envelope sender or From header anyway, and mail servers would happily accept mail from everyone and everywhere as long as it seemed that it had ended up in the right place. And it was good. And then along came spam.

SPF and DKIM? You need to run your own SMTP.

Now, of course, this is not a theoretical example: MacPorts has always provided its project members with an email alias under its domain. However, to fight spam, smart people came up with a multitude of ways to figure out whether mail received by a mail server was actually sent by whom the envelope claimed. There are currently two major mechanisms for this purpose: Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM).

SPF allows administrators to publish a list of servers that are permitted to send mail on behalf of a specific domain. Of course, since MacPorts did not actually provide an SMTP server and expected our developers to use their own, we had no way of compiling such a list and would thus allow the entire internet to send mail on behalf of our domain – something more and more mail providers nowadays treat as an indicator of spam.
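With an SMTP server in place, publishing an SPF policy comes down to a single DNS TXT record. A sketch of what such a record looks like (the host names here are placeholders, not our actual setup):

```
example.org.  IN  TXT  "v=spf1 mx a:smtp.example.org -all"
```

Here `mx` permits the domain’s MX hosts to send, `a:smtp.example.org` permits that specific host, and `-all` tells receivers to hard-fail mail from anywhere else.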

DKIM, on the other hand, adds a cryptographic signature to certain selected fields of an email as it passes through the outgoing server, to be verified against a public key published in DNS on the receiving end. But again, since there was no single central SMTP server, we could not ensure that all mail carried such a signature, and thus could not enable DKIM – the absence of which providers are also using as an indication of spam.
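For illustration, the two halves of DKIM look roughly like this: the sending domain publishes its public key under a selector in DNS, and the outgoing server stamps each message with a signature header referencing that selector (domain, selector and key material below are shortened, made-up examples):

```
; public key record, published by the sending domain
mail._domainkey.example.org.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."
```

```
DKIM-Signature: v=1; a=rsa-sha256; d=example.org; s=mail;
        h=from:to:subject:date; bh=...; b=...
```

The receiver looks up `s` (`mail`) under `d` (`example.org`), then verifies the signature `b` over the listed header fields `h` and the body hash `bh`.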

We had known for a while that we would eventually have to set up email submission, but had been delaying the actual setup, since we needed a way to configure the passwords to be used for SMTP. Since MacPorts’ migration to GitHub in October 2016, we only use GitHub’s OAuth2 for authentication. And while mail clients are slowly implementing support for that in SMTP and IMAP, it is not yet widespread enough to be usable in our case.

So, my todo list came down to:

  • Write a web application that uses GitHub OAuth2 to authenticate users.
  • Allow setting the SMTP password in a database from that web application. I figured a database would be a good idea, since it is the most convenient resource to share between different Unix users, unlike files or sockets, for which I would have had to configure groups.
  • Configure Postfix to authenticate SMTP against the passwords in the database.
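The last step can be sketched roughly as follows. One common setup (and the one I assume here – the file names, table and column names are hypothetical) is to let Postfix delegate SASL authentication to Dovecot, and to point Dovecot at the password table with an SQL query:

```
# main.cf: let Postfix hand SMTP AUTH verification to Dovecot
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes

# dovecot-sql.conf.ext: look the password hash up in the database
driver = pgsql
connect = host=/var/run/postgresql dbname=mail
password_query = SELECT username, password FROM smtp_passwords \
  WHERE username = '%u'
```

The web application then only needs to write a suitably hashed password into that table.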

Sounds simple enough. Boy, was I wrong…

Attending the Google Summer of Code Mentor Summit


From Friday, October 13th to Sunday, October 15th 2017 I had the opportunity to attend the Google Summer of Code Mentor Summit in Sunnyvale, CA. This is a summary of my experiences.

If you are not familiar with Google Summer of Code, it is a program where university students spend the summer working on an open source project. Google pays the students as a way to give back to open source, which is heavily used in its products. Students are mentored by experienced developers from the projects, and Google invites two mentors per project to the US in autumn each year for an unconference-style summit.

Together with Mojca Miklavec, I mentored Zero King, who did a great job improving the usability of GitHub pull requests for MacPorts by setting up Travis CI and a PR helper bot. Our original plan was to attend the Mentor Summit with Jackson Isaac, a 2015 student and this year’s organization administrator, but unfortunately his US visa was denied and Mojca stepped in instead.

Baidu Spider Caused More Than 80% of Our Trac's HTTP Traffic


Since MacPorts' move off Apple’s MacOSForge in October, we have been running MacPorts' Trac installation on our own infrastructure. We used to rely on server and admin time generously donated by Apple. Now that we no longer enjoy this luxury, we are on our own when it comes to keeping our infrastructure running.

For a few months, we were bedeviled by high server load apparently caused by our Trac installation and had a hard time figuring out the cause. Our monitoring showed a large number of HTTP requests and Trac’s response time would regularly take a nose-dive as soon as the backup started.

After a few attempts at tuning various knobs without much success, I finally decided to grab the Apache access logs and run awstats on them. Since we rotate our access logs biweekly, I only had 10 days of February for analysis, but even those 10 days revealed some pretty interesting data.
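awstats does the heavy lifting, but the headline number – which user agents dominate the traffic – is easy to reproduce with a quick pipeline over a combined-format access log (the log path is a placeholder; adjust it to your setup):

```shell
# Field 6 of a quote-split combined-format log line is the User-Agent
# string; count requests per agent and list the busiest ones first.
awk -F'"' '{print $6}' /var/log/apache2/access.log |
  sort | uniq -c | sort -rn | head
```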

CVE-2015-0842, CVE-2015-0843 in Yubiserver


Back in March 2015, I reported a security issue in Yubiserver, a small specialized HTTP server to verify HOTP/OATH tokens generated by Yubico’s Yubikeys. I’m publishing the details for reference.

I was looking for a new Yubikey validation server and, given the candidates’ small size, decided to code-review them. While looking at yubiserve, I found security issues in the code.

Secure Erase on OS X El Capitan


With the update to OS X El Capitan, Apple has rewritten Disk Utility. The pre-10.11 Disk Utility used to have an option to securely erase a disk – a feature I needed because I plan to throw a faulty disk away.

El Capitan Disk Utility erase dialog without a “Security Options” button

Now, Apple still documents the option in KB article PH22241, but has implemented code that hides the “Security Options” button in certain situations. Unfortunately, Apple did not document which conditions must be fulfilled for the button to be shown, which leads to the situation that I do not see it on any of my disks. My guess would be that the option is not available for encrypted disks, but since I do not have any unencrypted drives I cannot verify that assumption.

Why would you wipe an encrypted disk?

For an encrypted volume, wiping the header that contains the master encryption key should be enough to ensure that no data can be recovered. Conveniently, Apple does not provide an option to wipe the volume’s encryption header, and documentation on Apple’s CoreStorage format is scarce, which means I don’t know where the header actually is. So a full wipe it is.

Luckily, just because the GUI no longer supports the feature does not mean that it cannot be done. The diskutil command-line tool still has a secureErase verb that supports overwriting entire volumes. Because I was doing this with CoreStorage volumes, I first had to delete the logical volume before secureErase would unmount the physical disk below:

Deleting a CoreStorage volume
:) clemens@cBookPro:~$ diskutil cs deleteVolume CD3D75E0-F317-42B6-B44F-FDCB1A9448CD
The Core Storage Logical Volume UUID is CD3D75E0-F317-42B6-B44F-FDCB1A9448CD
Started CoreStorage operation on disk7 cTM
Unmounting disk7
Removing Logical Volume from Logical Volume Group
Finished CoreStorage operation on disk7 cTM

Once the logical volume was gone, I was able to start the wipe with diskutil secureErase:

Securely erasing a disk in OS X El Capitan
:) clemens@cBookPro:~$ diskutil secureErase
Usage:  diskutil secureErase [freespace] level MountPoint|DiskIdentifier|DeviceNode
Securely erases either a whole disk or a volume's freespace.
Level should be one of the following:
        0 - Single-pass zeros.
        1 - Single-pass random numbers.
        2 - US DoD 7-pass secure erase.
        3 - Gutmann algorithm 35-pass secure erase.
        4 - US DoE 3-pass secure erase.
Ownership of the affected disk is required.
Note: Level 2, 3, or 4 secure erases can take an extremely long time.
:( clemens@cBookPro:~$ diskutil secureErase 2 disk4
Started erase on disk4
Pass: 1
Pass: 2
Pass: 3
Pass: 4
[ - 0%..10%..20%..30%..40%..50%.......................... ] 52% 25:03:07

A little research suggests that a single wipe is sufficient to prevent data recovery on modern disks, so the DoD 7-pass erase I used might seem like overkill, but since I’m throwing the disk out because it was causing write errors, I’m also using this as a final benchmark to see whether it trashes the disk completely.

OnePlus One Review


OnePlus logo on the OnePlus Oneʼs packaging

The One by OnePlus is a flagship phone designed and produced by the Chinese startup OnePlus, founded in December 2013. Only a few months later, in April 2014, the company announced the phone. The astonishing speed is less surprising once you know that the company’s founder, Pete Lau, was previously Vice President at Oppo Electronics and is no newcomer to the smartphone business.

The phone’s specs are clearly targeted at the high-end market. For example, it features a 2.5 GHz quad-core CPU, 3 gigabytes of DDR3 RAM, a 1080p IPS display and a 3100 mAh battery. The official website has the details – there really is no point in repeating all of them here.

I swear, it’s that large

There obviously already is a myriad of reviews of the OnePlus One (for example on YouTube), so I’ll just skip ahead to the points that are relevant to me as a computer scientist and the features that surprised me. My biggest concern when ordering the phone was its size. At 5.5 inches, the screen is huge, after all. I was pleasantly surprised to see the 15.3 x 7.6 cm phone fit comfortably in my front pocket. It does get a little cumbersome at times while driving, but that’s entirely manageable and only manifests itself during long drives. On the other hand, it was interesting to see how quickly I adjusted to the available screen real estate. Even before I actually switched my SIM over to the new phone, I was asking myself why I had bothered for so long with the vile 4.3 inch, 480x800 screen of my old HTC Desire HD.

Goodbye University, Hello Professional Life


Part of my university diploma.

A period of my life is coming to an end. Yesterday’s mail made that all the more obvious to me, since it contained my university diploma. I have now officially graduated from Friedrich-Alexander-University of Erlangen-Nuremberg with a Master’s degree in computer science. This is cause for celebration, especially since I managed to pass with distinction, but it is also an opportunity to look back. Since I will not stay at university or in Erlangen, graduation comes with a farewell.

I have enjoyed the last few years in Erlangen, especially at the System Software Group and its KESO Research Project where I wrote my Master’s thesis on “Compiler-Assisted Memory Management Using Escape Analysis in the KESO JVM”. However, in the last few months in Erlangen I’ve realized that it was time to move on and seek new challenges. And I have.

On September 1st I will take up a job as “software integrator Linux” at BMW Car IT in the city of Ulm. I’m hoping my experience with continuous integration from KESO, package management, and build systems from MacPorts may be helpful at my position. I’m really looking forward to working for BMW and moving to Ulm, and what I’ve seen so far has been fantastic! :-)

Off to pastures new!

What's New in MacPorts 2.3.0


MacPorts 2.3.0 has been released. But what’s new for users, and why should they use the new features?

This release contains a lot of changes under the hood that users probably won’t notice. For example, MacPorts no longer uses the system-provided version of Tcl, but ships its own copy. That might seem like a step backward at first glance, but it simplifies compatibility with older systems such as Tiger or Leopard (hello, PPC users), allows us to clean up some of the cruft in the codebase, and lets us fix long-standing issues, such as signal handling, in future releases.

Another change most users won’t notice is the use of HTTP pipelining (I know, I know, what took us so long?), which should be beneficial especially when downloading a lot of binary packages from our mirrors. Also related to downloads, but very much noticeable, are the new progress bars. Previously, download progress information was only available in verbose mode, but 2.3.0 comes with a nice progress indicator for downloads that take longer than a few seconds. You’ll also see the same progress bar in rev-upgrade, which previously indicated its progress using a simple percentage number.

One of the changes I’ve been waiting for (and working on) is “trace mode”. Trace mode is a poor man’s sandbox initially developed for the darwinbuild project at Apple. It is based on library preloading, a technique known from Linux systems using the environment variable LD_PRELOAD. That makes it inherently insecure, but since security (i.e., protection against malicious attackers) has never been a goal for this sandbox, that’s not critical. Trace mode adjusts the environment of a build in MacPorts by hiding all files that shouldn’t be there in a vanilla installation of OS X, as well as files in the MacPorts prefix that aren’t installed by a dependency of the current port. Trace mode is a great tool for both port authors and users: missing dependencies are easily identified, and with trace mode enabled, files in /usr/local can no longer interfere with a MacPorts build. This last point is especially important since lots of third-party installers and other package managers (looking at you, homebrew) install files in /usr/local. The next time a port fails to build for you, clean and retry with port -t instead.
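As a minimal sketch of the preloading technique itself – shown here with LD_PRELOAD on Linux; on OS X the equivalent mechanism is DYLD_INSERT_LIBRARIES, and the file and function choice below is made up purely for illustration:

```shell
# Build a tiny shared library that interposes a libc function.
cat > fake_getuid.c <<'EOF'
/* Override getuid() so every dynamically linked caller sees uid 0. */
#include <sys/types.h>
uid_t getuid(void) { return 0; }
EOF
cc -shared -fPIC -o fake_getuid.so fake_getuid.c

# Preload it: id normally prints your real uid, but now sees the fake one.
LD_PRELOAD=$PWD/fake_getuid.so id -u
```

Trace mode uses the same trick, interposing the file-system calls instead so a build only “sees” files it is entitled to.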

Other minor but helpful new features include a check for the presence of the Xcode Command Line Tools, a check that the Xcode license agreement has been accepted, and a new overview for the select feature: port select --summary.

Constant Resyncs With Windows 7 Software RAID


Despite my switch to a MacBook Pro almost six years ago I still have a Windows box I occasionally use, mostly when I forget to bring my MacBook’s PSU (which happens surprisingly often, despite having two of them for exactly that reason).

Since disk space on rotational drives is cheap these days, I switched to a RAID 1 configuration when I last upgraded the hardware in said computer. I went with two Western Digital WD20EARX 2 TB drives and first tried my mainboard’s fake RAID (AMD RAIDXpert on an AMD SB710 chipset). Long story short, I was unsatisfied with the performance and afraid of data loss should my mainboard die and no replacement with the same chipset be available.

I’ve seen software RAID on Linux, and it worked much better for me than my mainboard’s attempt at it, so I figured I’d try the software RAID implemented in Windows 7 (Professional, Enterprise and Ultimate only). Simple to set up and with acceptable sync speeds, I thought I had found what I was looking for – but then, seemingly every time I rebooted, the disk array would be inconsistent and resync from scratch. Needless to say, performance plummeted. The machine would hang for seconds waiting for I/O, and every reboot would make it all start over. Even worse, the resync couldn’t be aborted, and the disk array couldn’t be disbanded either (who at Microsoft thought that was a good idea?).

Turns out the culprit was a known one. KB 2913050 says “Mirrored RAID volumes report Resynching status after you restart Windows 7 […]”, and this would happen after each hotfix package installation – so for infrequently used computers, after basically every (re-)boot. I especially liked the “resolution”:

Microsoft is aware of this issue and intends to address it in a future release of Windows.


It’s broken, but we’re not going to bother fixing it in Windows 7. Give us some money, if you want RAID support.

Or you could set the following magic registry keys, but we’re not going to tell you what they do, how they affect the Volume Shadow Copy Service or why we’re not setting them with a hotfix for everybody.

Please, somebody remind me why I thought Microsoft had a good reputation for their support of business-grade software…

Autoconf: AC_CONFIG_SUBDIRS With Custom Flags for Subprojects


Starting with version 2.3.0, MacPorts will use its own copy of Tcl rather than relying on the Tcl shipped by Apple with OS X. Since MacPorts still works on versions of OS X down to Tiger, which only has Tcl 8.4, all features introduced in Tcl 8.5 have been off limits and workarounds had to be used. That turned out to be unsatisfactory – especially having to avoid {*} argument expansion (with the ugly workaround of using eval).

The idea of bundling a private copy of Tcl first came up in July 2013 on the macports-dev mailing list, originally in the context of the Apple distribution of Tcl changing in OS X Mavericks in a way that would no longer allow MacPorts to build from source if the optional Command Line Tools package wasn’t installed.

The Problem

MacPorts uses GNU autoconf in its build system. GNU autoconf supports bundling dependencies in subdirectories using the AC_CONFIG_SUBDIRS macro – but it wasn’t sufficient for two reasons:

  • The Tcl configure script creates a file that is needed by MacPorts' configure to find the correct Tcl interpreter and build setup. AC_CONFIG_SUBDIRS will delay configuring subpackages to the very end, but we needed it to be done earlier.
  • AC_CONFIG_SUBDIRS will always pass the same arguments given to the main configure script, which includes the prefix setting. That would install our local copy of Tcl to a location that’s being used by the MacPorts tcl port (which is version 8.6 and at the moment incompatible with some of the MacPorts code).
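For reference, the stock macro is used like this – a single literal directory argument, with the subproject configured at the very end and receiving the top-level arguments verbatim (the path is illustrative, not the actual layout of the MacPorts tree):

```
AC_CONFIG_SUBDIRS([vendor/tcl/unix])
```

Both shortcomings above follow directly from this interface: there is no place to say “configure this now” or “configure this with different flags”.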

There have been a few attempts at solving similar problems; one of them, a patch against autoconf, was sent to the autoconf mailing list in April 2011, but apparently never applied. I didn’t want to require a patched autoconf to generate the MacPorts configure script, so applying the patch was not an option.

The Solution

An unsolved problem in a technology I hadn’t used much yet? That sounded like a great opportunity to learn something new – and so I wrote the missing macro, mostly by reading and copying the source of AC_CONFIG_SUBDIRS and adjusting it where needed. I also added support for extracting from a tarball so I wouldn’t have to commit the extracted Tcl sources. MP_CONFIG_TARBALL (source) takes the path to a tarball, the name of the directory created by extracting that tarball which contains the configure script, and a list of configure parameters to pass to the subproject. Each given parameter overrides the corresponding argument given on the main project’s command line; all arguments that are not overridden are passed through, as AC_CONFIG_SUBDIRS would do.
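Going by that description, a call would look something like this (tarball name, directory and flags are made-up examples, not the exact values used in the MacPorts tree):

```
MP_CONFIG_TARBALL([vendor/tcl8.5.17-src.tar.gz],
                  [tcl8.5.17/unix],
                  [--prefix=${prefix}/libexec/macports-tcl --disable-shared])
```

The third argument is what makes the private copy possible: the overridden --prefix keeps the bundled Tcl out of the location used by the MacPorts tcl port.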