HECK YES.
Really excellent and clear explanation of what this really is all about. Hint: it has nothing to do with “net neutrality.” When I first read about the deal and saw the various reactions in the media, I had a feeling that people were overreacting to what simply amounted to nothing more than an interconnection agreement, but I wasn’t sure where to start digging for the details. Kudos to Dan Rayburn for doing all of the legwork. (via DF)
In Soviet Russia, cloud computing cancels YOU!
Seems reasonable. (via — of course — HN)
(Warning: NSFW language)
“So, when your Aunt asks why her 1.2GHz computer isn’t fast enough to run an online word processor that has the same f***ing features as the 1987 version of Corel [sic] WordPerfect, you don’t have an answer for her. There is no justification.”
Oh man. This one had me laughing so hard I was in tears. Not sure how this is the first time I’ve run across this (found via this project’s page which showed up on HN today; also, HN).
There is more than a kernel of truth in here, but I suspect the author was intentionally trolling. If he is in complete earnest, depending on how you view the market today, you could argue either that this is some tasty claim chowder or that it is right on the mark. After all, Chrome OS is actually a thing now — by which I simply mean that “it exists”, not that it is a phenomenon. The question is whether anybody really takes it seriously, or will take it seriously. I’d like to think that the war is over and “rich clients” have won, but it could just as easily be the case that web apps are only starting to find their footing now, and just you wait another 5-10 years, by golly. (*shudder*)
See also: “Uncle Bob” Martin’s fantastic rant from his RailsConf 2010 talk (well worth watching the whole thing).
…yet another example of why carriers should not be in the business of selling hardware.
“So you’re telling me there’s a chance!” (via HN)
If you want to sell ads, sell ads. Own it. Don’t try to coat it with a layer of frosting and tell me it’s a f***ing cupcake.
I have to agree with John’s take on how Mozilla has tried to spin this. It feels slimy and dishonest. At the same time — and without trying to excuse the way they are sugarcoating it — I read the description of what they are planning to do, and it sounds much less offensive than I initially assumed it would be. Firefox has a “tiles” feature that works much the same way that Safari’s “Top Sites” does, and the only place that “sponsored content” will appear in Firefox is within some of those tiles. “Ad-supported” software has such a bad reputation these days that when I first read that Firefox was planning to introduce ads, I assumed the worst: that ads were going to be in-your-face and could pop up at any time during your browsing session. This appears not to be the case.
Whether or not the feature ends up being one that can be turned off, the “tiles” themselves can still be toggled off altogether.1 And since Firefox is open source, there is nothing preventing somebody from shipping a version that does not contain this new “feature” (in fact, it would shock me if nobody did this). I wouldn’t be surprised if the code for “directory tiles” isn’t even checked into the public code repository by the Mozilla folks.
Still, I’m saddened both by the news and by the way it was announced, mainly because it stinks of desperation to me. It is depressing to see an application that I once held in high esteem fall to such a low point that they feel the need to go through with something like this.
(Via HN.) Sobering and sad at the same time. I think what he says is right on the mark, but I absolutely detest the SaaS model. It might be great for the business, but I think it is terrible for the end-user, which is a subject I hope to get into more deeply at a later time.
So my dilemma is this: let’s say that, hypothetically, you have a software product idea. But the idea is for a product that would be introduced into a mature market that is already saturated. You hope it will be a disruptive product, but the only way you can see it gaining any kind of traction is if it is (or large parts of it are) open source. And you’re not interested in SaaS.
Is it impossible to build a self-sustaining business around that? Moving forward, is SaaS really the only way?
I recognize that so far, I haven’t managed to live up to my promise: I have not been posting much in the way of HOWTOs here. It turns out that thoroughly documenting the work you’ve done is both time-consuming and hard. Documentation has never been a particular strong suit of mine, so this is a good exercise for me.
VPN Protocols
One subject I’ve wanted to address for a while is the state of VPN support on mobile platforms, both on the device side as well as on the network side. Both iOS and Android support PPTP, L2TP over IPsec, and raw IPsec. I have successfully used PPTP and L2TP/IPsec in the past, but a few months ago, my service provider made some changes to the way they gateway internet traffic. These changes affect more than just VPN traffic and have not been popular with users, and to be honest I’m none too pleased about them myself. The long and short of it is that GRE/IP and ESP/IP no longer work across their network, effectively preventing people from using PPTP, L2TP/IPsec, or IPsec by itself…the only three options that are supported by either mobile software platform.1
There are, of course, other options that exist for establishing secure network tunnels, but the trick is that whatever software platform you are using has to support it, as does the endpoint you want to connect to. The most desirable options are those that can “just work” regardless of the network you are connected to. NAT has become so prevalent in the IPv4 world that using a VPN protocol that either doesn’t work over NAT or is broken by certain NAT implementations (or which requires a NAT helper that the router performing the translation does not provide) is oftentimes just not worth bothering with.
In any discussion about VPN protocols, OpenVPN will inevitably come up simply because it is generally NAT-friendly and pretty flexible to boot; however, I’ve never been particularly fond of it. For one, the way it is configured and used feels very “foreign” to me in a way I can’t quite put into words, and the few experiences I have had with it left a bad taste in my mouth.2 The other problem with it is that practically nothing supports it out-of-the-box, and historically, attempts by third parties to add support for OpenVPN to either platform have felt very grafted-on.3 Finally, in my particular case, I want to use RouterOS as my VPN access concentrator platform, and its OpenVPN support is currently limited to TCP mode, and TCP-over-TCP is a bad idea.
Despite some obvious security shortcomings, I personally think that the use of raw L2TP, unaccompanied by IPsec, is as good a compromise at this point as you’re going to find. It’s basically pure PPP running over pure UDP, so it doesn’t have most of the NAT issues that plague PPTP (GRE) or IPsec (ESP), and there are very few NAT engines that you will be unable to get your tunnel to punch through. Most router OSes and VPN access concentrators support it (it’s, y’know, kind-of a prerequisite for L2TP/IPsec support), and iOS and Android both have built-in support for L2TP already as well.
There’s just one problem: neither iOS nor Android inherently supports using it the way we want to be able to use it. By default, they insist that L2TP must be paired with IPsec. Unlike OpenVPN, though, 99.9% of the software support for this is already present in the underlying OS…we just have to coax it into doing what we want.
Raw L2TP on iOS
The solution to getting iOS to establish an L2TP session without first setting up IPsec turns out to be fairly straightforward, assuming that you are using a jailbroken device. iOS is based on Darwin, which is certified UNIX, and just like OS X, iOS uses the same practically de facto UNIX pppd implementation that you run into almost everywhere else: the pppd formerly known as ANU pppd.

pppd expects most of the options for the connection to be supplied to it as direct arguments when it is invoked, but it will also grab arguments from the plain-text /etc/ppp/options file, if present. The Darwin L2TP client is implemented as a pppd plugin and can be passed arguments by way of pppd as well. So if you create such a pppd “options” file and stuff the following into it, you can leave the “Secret” field (IPsec PSK) blank when you configure your L2TP connection, and it will not try to negotiate IPsec between you and your access concentrator:
plugin L2TP.ppp
l2tpnoipsec
Incidentally, everything works exactly the same way on OS X, if you ever find yourself needing to do raw L2TP from a Mac.
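If you would rather not create the file in an editor, something like this from a root shell does the same thing (over SSH on the jailbroken iOS device, or in Terminal on a Mac; this sketch assumes /etc/ppp exists and the filesystem is mounted read-write, as it typically is once jailbroken):

echo 'plugin L2TP.ppp' >> /etc/ppp/options
echo 'l2tpnoipsec' >> /etc/ppp/options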
Raw L2TP on Android
Before Ice Cream Sandwich (4.0), raw L2TP was actually one of the options presented when one went to configure a new VPN connection on Android. For some reason, Google decided to pull that option.4
Also unfortunate: trying to talk Android down from negotiating IPsec before making an L2TP connection is nowhere near as easy as stuffing a few arguments into the pppd “options” file. Android also uses Paul’s pppd, but unlike iOS, the IPsec negotiation is not kicked off by the L2TP client, and the L2TP client is not a pppd plugin. It’s a userspace binary, and both it and the IPsec IKE daemon are called as necessary by the Android framework when the user requests an L2TP VPN session.

The IPsec daemon is called racoon, and the PPTP/L2TP client daemon — unique to Android — is named mtpd. mtpd sets up the L2TP or PPTP session, and then it in turn executes pppd. mtpd does not detach itself into the background, nor does it ask pppd to do so either.
If you try to run mtpd from the shell, you’ll get some usage information back:
# mtpd
Usages:
mtpd interface l2tp <server> <port> <secret> pppd-arguments
mtpd interface pptp <server> <port> pppd-arguments
You can also find some example mtpd invocations over here, which proved to be extremely useful. (Do note that there were apparently some changes between 2.x and 4.x; mtpd now for some reason also requires that you specify the network interface that you want to make the connection through.) All this to say that it is entirely possible to invoke mtpd manually and initiate a raw L2TP connection by doing so.
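For example, a raw L2TP session with an empty IPsec secret can be kicked off with a single command along these lines (the interface name, server address, and credentials below are all hypothetical placeholders):

# run as root; wlan0, 203.0.113.10, myuser, and mypass are placeholder values
mtpd wlan0 l2tp 203.0.113.10 1701 '' linkname vpn name myuser password mypass defaultroute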
Armed with this knowledge, I concocted the following shell script to automate the process of connecting to my VPN at work. To use this, if you are running stock AOSP, you will need a busybox binary that supplies you with awk and a couple of other utilities that are not supplied with the Android userland by default. It assumes you want to send all data over the VPN. If the VPN connection drops, it will automatically restart it again; to quit, you will need to Ctrl-C or kill the process. It will work over WiFi or cellular, but if your particular phone uses a device name other than rmnet_usb0 for the cellular modem, then you should also change the CELLULAR variable to match the actual interface name for your phone. Change the 3 variables at the very top to reflect the VPN access concentrator hostname (or IP), your username, and your password:
#!/system/bin/sh

# Change these 3 variables to match your access concentrator and credentials.
ENDPOINT=my.vpn.server.com
USERNAME=username
PASSWORD=password
# Interface name of the cellular modem on this particular phone.
CELLULAR=rmnet_usb0

# Figure out the current default route: which interface, and via which gateway.
IPROUTE=`ip route list exact 0.0.0.0/0`
INTERFACE=`echo $IPROUTE | awk '{print $5}'`
DEFAULTROUTE=`echo $IPROUTE | awk '{print $3}'`
# Resolve the endpoint to an IP address using ping's output.
ENDPOINT_IP=`ping -c 1 $ENDPOINT | awk 'NR==1{print $3}' | tr -d \(\):`

# Remove the existing default route so the one pppd installs is unambiguous,
# but keep a host route to the VPN endpoint itself.
ip route del default
if [ "$INTERFACE" != "$CELLULAR" ]; then
    ip route add $ENDPOINT_IP/32 dev $INTERFACE via $DEFAULTROUTE
else
    ip route add $ENDPOINT_IP/32 dev $INTERFACE
fi

# Reconnect forever if the session drops; Ctrl-C or kill to stop.
until (false) do
    until (mtpd $INTERFACE l2tp $ENDPOINT_IP 1701 '' linkname vpn name $USERNAME password $PASSWORD defaultroute) do
        echo VPN dropped...reconnecting.
    done
done
This works, but it’s kind of a drag to have to fire up a Terminal and peck out a shell script name every time I want to make a VPN connection. Also, the connection script above deletes the original default route that Android installed so that there is no ambiguity that the one pppd installs in the routing table is the one that should be used, and the easiest (read: laziest) way to clean up that damage is to reset the network interface, which I do by simply toggling airplane mode off and on after I’m done using the VPN. Which is stupid.
Ultimately, I think the right answer to this problem is to patch up Android so that standalone L2TP is an option again, and this is something that I plan on pursuing. In the meantime, though, I discovered Gscript, which is a great little tool that allows me to create shortcuts to shell commands on my launcher.5 I created two other shell scripts, a VPN-Start and a VPN-Stop; the first one calls my VPN connection script, and the second kills it. I then used Gscript to put shortcuts to those two scripts on my launcher.
I quickly discovered a problem, though: for some reason, I can detach mtpd from my active Terminal session, quit Terminal, and still have mtpd running in the background, but if I try to do the same thing from within Gscript, mtpd and pppd die the instant the script reaches its end. So I had to come up with a new strategy.

My VPN-Start script detaches my VPN connection script and then starts pinging an address on the other side of the VPN. For as long as I want the VPN connection to remain up, I just continue to let that run in the background. When I want to disconnect, I run VPN-Stop, which kills ping. With the ping no longer running, the VPN-Start script reaches its end, that Gscript instance dies, and takes mtpd and pppd with it. Finally, VPN-Stop also toggles airplane mode off and on for me before it ends.6
VPN-Start
---------
#!/system/bin/sh
#I called the VPN script 'workvpn.sh'
workvpn.sh &
#The ping keeps this script (and with it, mtpd and pppd) alive until it is killed
ping 8.8.8.8
VPN-Stop
--------
#!/system/bin/sh
#Killing ping lets the VPN-Start script end, which takes mtpd and pppd down with it
killall pppd
killall ping
#Toggle airplane mode on and off to reset the interfaces and restore the default route
settings put global airplane_mode_on 1
am broadcast -a android.intent.action.AIRPLANE_MODE --ez state true
sleep 2
settings put global airplane_mode_on 0
am broadcast -a android.intent.action.AIRPLANE_MODE --ez state false
One final note: depending on your provider and whether you are connected to WiFi or the cellular network when you are trying to use your VPN, you may experience DNS issues. In my particular case, I discovered that my provider’s DNS servers do not respond to requests from IP addresses outside of their network. This meant that as soon as the VPN came up, I was unable to resolve any names. Furthermore, the pppd option usepeerdns does not work on Android; Android doesn’t consult /etc/resolv.conf for its nameserver list but instead looks to Android system properties which are settable via setprop, and undoubtedly pppd was not updated by the Android folk to be aware of this.
You could add something to this effect to the end of VPN-Start, just before the ping command:
setprop net.dns1 8.8.8.8
setprop net.dns2 8.8.4.4
…or whatever DNS servers you want to use. If you wanted to get really fancy, you could probably script something that would read the values out of /etc/resolv.conf and then execute setprop for each value you come across. Unfortunately, this isn’t foolproof since this will only be executed once, and if you hit a rough patch signal-wise and your connection blips, your DNS settings will most likely end up reverting back to the ones your provider supplies. In my case, since I actually have control over the access concentrator I connect to, I configured it to proxy any incoming UDP port 53 traffic to our DNS servers instead, which means I’m able to avoid having to deal with the issue on the Android/client side of things. Most people connecting to their corporate VPN won’t have the freedom to implement a similar workaround, however.
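For what it’s worth, the “really fancy” version mentioned above might look something like this minimal sketch (it assumes busybox is providing awk, and that the negotiated nameservers really do end up in /etc/resolv.conf):

#!/system/bin/sh
# Hypothetical sketch: mirror the nameservers from /etc/resolv.conf into the
# net.dnsN system properties that Android actually consults.
i=1
for ns in `awk '/^nameserver/ {print $2}' /etc/resolv.conf`; do
    setprop net.dns$i $ns
    i=$((i + 1))
done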
A Word About Security
As I glossed over near the beginning, this is not a particularly secure solution. By default, pppd will not attempt to negotiate any security. The security mechanisms built into PPP itself (ECP) are not great, and this is why people started running L2TP over IPsec in the first place. Probably the most secure solution — which, unfortunately, is not saying much — is to use MPPE. MPPE is the encryption mechanism that was developed for PPTP, but since it runs directly on top of PPP, there’s absolutely no reason that you cannot also use it with L2TP, provided that your access concentrator supports it, too. Fortunately, RouterOS treats all types of PPP as equals feature-wise, so it literally just works.
I ran into many problems while trying to use MPPE on the client side, however:
- iOS just won’t ever try to negotiate MPPE over L2TP, period. I have also not been able to locate any evidence that there is an option I can pass to the OS X L2TP module that will cause it to try, and the require-mppe option to pppd doesn’t seem to do anything, either. So, at least for the time being, using MPPE over L2TP on Apple operating systems appears to be a lost cause.
- Android is trickier. You can easily enable MPPE over L2TP by passing the require-mppe parameter to pppd — just add it at the end, say after defaultroute (see the example just after this list). However, for reasons I have been as-yet unable to determine, it is wildly unstable and causes the L2TP connection to drop all the time, especially while it is under load. I found some references to MPPE issues in Android, but I’m not sure they are related to what I was seeing. First, they seem to be old references, and second, as far as I can tell, the problems are restricted to stateful MPPE, which pppd doesn’t try to negotiate by default and which I was not using. Further research indicates that the stateful MPPE problems at least are traceable to bugs in the Linux MPPE implementation.
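Concretely, with the connection script from earlier, that just means tacking require-mppe onto the end of the mtpd line:

mtpd $INTERFACE l2tp $ENDPOINT_IP 1701 '' linkname vpn name $USERNAME password $PASSWORD defaultroute require-mppe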
Even if I were able to coax MPPE into working properly, the reality is that it shouldn’t be considered secure, and nobody should rely on it exclusively to protect them from snooping eyes. Much like WEP, it is generally accepted that using it is little better than having no security at all. Unless you are also using secure protocols such as SSL on top of the VPN itself, you should, for all intents and purposes, consider your VPN traffic to be open. So don’t be lulled into a false sense of security even if you can manage to get MPPE working for you in a stable manner. For people for whom security is everything and the sole point of using a VPN connection to begin with, this is going to be a deal-breaker, and it is the one area where L2TP/IPsec and even OpenVPN have everything else beat.
“Getting ‘bout as much attention as a circus on the moon.” — Bruce Hornsby
I would wager that programming is not the only engineering profession this might be true of. (HT: HN)
Update 02/14/2014: The official announcement.
This news caught me completely off-guard. I’m not sure whether Dennis’ departure was announced as part of the Lenovo acquisition and I just missed it, or whether this is a recent development. Either way, it actually gives me pause about my previous thoughts regarding the acquisition. I am concerned that if all of the Google guys start jumping ship that, for better or worse, Motorola is not going to look and act like the same organization that it’s been for the past 2 years.
I think there’s some truth to this, and I need to take some time to digest it. (HT: HN)
In other words, please, Apple: no smartwatch. The day they release one is the day they jump the shark. (HT: Marco)
Kinda/sorta on the same subject, since they both address the what-to-work-on question: The “blah blah blah on steroids” (also HT: Marco).
Ah, okay. That’s so much clearer…
Today, I discovered something quite by accident: a really neat, undocumented1 touch gesture on Android.
On both iOS and the majority of modern Android devices, zooming images and text (web pages) can be accomplished in one of two ways:
- Double-tapping.
- Pinch-to-zoom.
They each have their pros and cons, naturally.
Double-tapping is great for being able to execute one-handed zooms, but it doesn’t always do exactly what you want. What it does is almost always completely logical, and if you had stopped to think about it for a second, you might even realize that you should have expected the outcome you got, but there are times when you want more control over exactly how much you are zooming, and double-tapping doesn’t give you that kind of control.
Pinch-to-zoom does give you that precise control. However, unless you are a freak of nature, you typically need two hands to execute that gesture with any kind of finesse.
Android apparently offers you a third zoom gesture option: the double-tap-and-hold. It’s executed like this: perform a double-tap, but don’t lift your finger after it goes down for the second tap. Tap once, release, tap a second time, and leave your finger in place on the screen. Now, drag your finger down (as if you were trying to scroll the document up), and instead of scrolling, you’ll be zooming in! Dragging your finger up will cause you to zoom out.
This is such a fantastic gesture. It makes working with the phone one-handed so much nicer. Kudos to the genius who came up with this.
People are decidedly not happy about the new “beta” site.
(HT: HN)
…and a response piece: Learn C, Then Learn Computer Science (HN).
Hmm, it turns out that both fundamentals and the theory behind them are important. Who’da thunk it?
I loved this post; so much of it resonates with me, and it ties nicely back into the broader discussion surrounding the Berners-Lee article from yesterday.
Weird.
Yet another article about textual vs. visual programming. Is it a false dichotomy? (HT: /.)
Last week, right after it broke, I linked to the news that Google was divesting itself of Motorola, and selling it to Lenovo. I wasn’t quite sure what to think about it then. Part of me was sad because I saw some promise in the changes Google had been making at Motorola, and I was eager to see how they would play out. Towards the end of ATP episode 50, the guys talked some about this turn of events, and after listening to this and having some time to chew on the issue a bit more, I’ve decided that I’m largely optimistic about this new combination. What follows is a slightly edited version of the thoughts that I sent to the ATP folks.
Lenovo and I have a history. Before I started buying Mac laptops, I was a huge ThinkPad fanboy. I started using ThinkPads back in 1998, when it was still an IBM institution. So I was there as the Lenovo takeover unfolded. I know that many people who were once (and perhaps still are) ThinkPad users have not been happy with some of the changes that came about under the Lenovo regime to what they consider to be “signature traits” of the brand, such as what was at one time thought to be the absolute best keyboard on a mobile computer of any stripe. I had my own gripes as well: the timing of this change may have been purely coincidental, but I personally was not happy when, in 2006, roughly a year after the Lenovo acquisition had been finalized, they stopped shipping workstation-grade laptops with high-resolution (for the time) in-plane switching LCD displays. That’s right: IBM was one of the few manufacturers doing IPS (which they called “FlexView”) on their laptops back in those days. In fact, they first offered that option in 2001, well before Apple started to get religion about it. In comparison, the Apple laptop displays of that era (least-common-denominator twisted nematic technology, sub-100 DPI) were absolutely terrible. I didn’t see how anybody could stand to use those things, much less call them gorgeous.
That period between 2006 and 2012, when the Retina MacBook Pro finally came out…those were dark days.
But as John and Marco both alluded to, Lenovo overall did a decent job of not screwing things up…which in the world of high-technology is relatively high praise. I’ll stick my neck out and say it: if I had to buy a PC these days, it would still be a Lenovo ThinkPad, hands-down. I’ve used some recent machines, and the engineering, build-quality, and durability are still excellent. And I think that the key to the relative success of the Lenovo transition was that it wasn’t just a brand acquisition, or just an IP acquisition, or just a talent acquisition. Lenovo saw the value in all of it, wanted all of it, and kept all of it. (To this day, I believe that Lenovo still contracts with IBM to use their international support infrastructure; if you call Lenovo support in the U.S. today, it still rings the same call center in Atlanta that it always has.) They didn’t conduct massive layoffs or restructurings afterward. As I understand it, they retained all of the U.S. offices where the IBM Personal Systems Group guys were housed as well as all of the people in those offices, and at the end of the day, it was still the same guys making the decisions as before, and the same engineers working their magic as before. So if anybody is actually guilty of tarnishing the ThinkPad brand post-Lenovo, I think it could be successfully argued that the blame should be laid squarely at the feet of the same people who had been running the place when it still said “IBM” above the door, and not Lenovo management. In retrospect, I think that the IBM guys had more influence on Lenovo after being bought than vice-versa. Regardless of how it looked on the balance sheets, it felt in practice more like a bizarre reverse-acquisition, as if IBM spun off the PSG, had them acquire Lenovo, and then assumed their name.
Now, I personally think that Google threw in the towel on Motorola way too early. Everybody in the press was going ga-ga over the Moto X when it came out last summer, and, yeah, it looks like a decent phone, but in my opinion, the most interesting and exciting thing to come out of that strange union was the Moto G, which was only released last December. I was (and still am) very bullish about the Moto G, and was eager to watch this experiment unfold and see whether it ended up paying off for them or not. (If you watch the Moto G announcement, it’s actually pretty hilarious: it basically consists of a bunch of ex-Googlers that had been installed at Motorola getting up and publicly saying, in effect, “look, you idiot hardware partners: THIS is how you make an Android phone. Are you getting this, Samsung? Are you taking notes, HTC?”) A couple of months is simply not enough time in this industry to accurately gauge the success or failure of a strategic course correction, and Google getting rid of Motorola almost feels in a sense like HP and Palm all over again. At least Google is giving somebody else the chance to make a go of it rather than just pulling the plug the way HP did, and if Lenovo can maintain that same kind of “hands-off” attitude with this freshly-Googlified Motorola that they had with the IBM guys1, getting acquired by Lenovo may end up proving to be the best possible outcome for them.
(EDIT: HN link)
“Yes, but…” I think he is focusing on the wrong threat here. The real threat of (metaphorical) balkanization to the web has already touched down, and exists in the form of Facebook, Twitter, et al., even on down to all of those terrible phpBB sites. He touches on this when he mentions that even he is concerned about being “reliant on big companies, and one big server”, but then he goes on to sweep that concern under the rug.
Don’t get me wrong: I, too, am no fan of seeing the internet (not “the web”, but the internet…let’s keep our terms straight here) split up along literal geopolitical lines. (And, quite frankly, I think it’s a miracle that the internet has remained as open as it still is to this day world-wide. It’s open to varying degrees, to be sure, but it’s still remarkable.) But I’m also not sure that there is much that can be done to prevent that, aside from vocally condemning the governments of such countries and trying to influence things by setting a better example. They are sovereign entities, after all.
Newsflash: if you don’t have a free society, you won’t have an open internet.
Yum.
I’m not as down on textual coding as the author of this piece is, but I do think he may be on to something. Programming — whether via textual means or something else — absolutely does need to become more accessible to mainstream users. The problem with modern software from a usability perspective is that so much of the user’s experience today is heavily scripted, and for the most part, users are not allowed to deviate from that script, or to tie their own data together in interesting and useful ways without relying on some software developer to make doing so possible first. But even if users could carve their own paths, the ability to do so wouldn’t do them any good if they didn’t know the first thing about how to do it. The challenge is to make a system that makes “programming” feel no different than using pre-canned applications. (HT: HN.)
I stumbled across this essay today while perusing Hacker News. Some good food for thought here. Also, easier said than done…but isn’t that true of anything worth doing?
“Those who don’t know history are destined to repeat it.”
This is a really fun blog. I stumbled across it last year, and am beyond thrilled that it is still getting regular updates, and even receiving some much deserved attention. The curator is apparently a veteran of the industry and seems to know his stuff (further evidence).
The trailer for this heartfelt movie tribute to the 80s is hysterical and extremely well done. I cannot tell you how psyched I am that they not only met but completely blew away their Kickstarter funding goal.
This is a repost of something I wrote for the MikroTik forums a few months back: a procedure I developed for getting the MetaROUTER feature to work (with some caveats) on MikroTik’s dual-core PowerPC RouterBOARD models.
MikroTik is a router manufacturer and router operating system software developer. They make good stuff. Linux runs at the core of most of it, but almost everything else on top of the kernel itself is homegrown. I like to think of them as the Apple to Cisco’s (old-guard) “mainframe”-esque product line. The price-to-performance ratio of their hardware products is phenomenal. MetaROUTER is a hypervisor that MikroTik built into the MIPS and PowerPC versions of RouterOS, but which currently is only officially supported on single-core systems. (x86 RouterOS uses KVM for virtualization support on that hardware platform, and has no such limitation.)
(I felt the heavy-handed disclaimer at the beginning was necessary given that I was posting this to their official support forum, and I didn’t want anybody getting the wrong idea.)
Just to be clear, by proceeding with this, you acknowledge that you either have an RB1100Hx2 or an RB1100AHx2 which is not currently being used in a production capacity and which you are willing to experiment on, and that neither MikroTik nor I can be held liable for any damage you may cause to yourself, your property, or your business by performing this mod. In theory, this is just a software change that should have no permanent impact on the hardware and which is easy to undo; however, there are no guarantees attached to this procedure, and…
…THIS IS NOT OFFICIALLY SUPPORTED BY MIKROTIK. AT ALL. PERIOD. DO NOT CONTACT MIKROTIK SUPPORT ABOUT THIS PROCEDURE, EVER. YOU ARE CHOOSING TO PROCEED AT YOUR OWN RISK!
The reason that MetaROUTER is not supported on the Hx2 or the AHx2 is that they are dual-processor systems, and MetaROUTER does not support multiprocessor systems at this time. So, in a nutshell, what we are going to do to get MetaROUTER working on these boards is replace the multicore/SMP Linux kernel with the uniprocessor Linux kernel. When RouterOS is installed on a device, the installer determines which kernel to install depending on the hardware it is being installed to. The uniprocessor kernel should (in theory) boot and run an (A)Hx2 board just fine, and this kernel contains all of the support needed to host MetaROUTER guests. The downside is that by doing this, you will be limited to 1 CPU core while running this kernel, essentially “downgrading” your 1100AHx2 to an 1100AH. If MetaROUTER support is worth more to you than the extra CPU core, then this procedure will give you the freedom to make that sacrifice.
Here are the materials that you will need to assemble beforehand:
- 1 RB1100Hx2 or RB1100AHx2
- 1 other RouterOS router (any kind) to act as a Netboot host
- 1 MicroSD card
- 1 serial cable
- The “upgrade package” NPK for the version of RouterOS you wish to run on the (A)Hx2
- A computer with a functioning Python interpreter installed
- The following files:
What follows is a description of how to put the pieces together; I assume here an advanced familiarity on the part of the reader with RouterOS, networking, and Linux/Unix. If any part of these instructions is unclear, let me know and I will try to fill in the gaps for you.
Basically, what we are going to be doing is extracting the 4 kernel files from the PowerPC NPK and replacing the kernel on the 1100AHx2 with the kernel for the 1100AH. Each PowerPC upgrade NPK contains 4 kernels:
- One for RB333/600 (Freescale MPC83xx)
- One for RB1200 (APM/AMCC PPC44x)
- One for RB800/1000/1100/1100AH (Freescale MPC85xx — uniprocessor)
- One for RB1100Hx2/1100AHx2 (Freescale MPC85xx — multiprocessor)
The one that is already on your (A)Hx2 is the fourth one. The one you want is the third one. Unfortunately, it is impossible to know which kernel is which without trying each one, because they are all named the same thing in the NPK (‘kernel’). The supplied Python script will extract each kernel file as it comes across it in the NPK and add a number to the end of each file (kernel1, kernel2, kernel3, kernel4), but the kernels are not always stored in the same order in each RouterOS upgrade NPK. In general, I have found that the uniprocessor kernel for RB1100 is most often the second-largest of all the kernels in terms of file size, while the multiprocessor kernel (the one you’re already running) is generally the largest of the 4. In RouterOS 6.4, the uniprocessor RB1xxx kernel is kernel4, but it could be different for different RouterOS versions. Keep this in mind; you may have to perform a little trial-and-error testing before you find the right kernel.
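Once you have extracted them (see the steps below), a quick way to compare the sizes is to list the extracted files sorted by size on the machine where you ran the Python script:

ls -lS boot/kernel*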
Here we go, step-by-step:
- It is recommended that you start by performing a clean, fresh Netinstall of RouterOS onto your Hx2/AHx2.
- Boot up the router, and try to create a MetaROUTER guest. You should see the error “not enough resources”. This is expected.
- On the machine that has Python installed, put the dumpnpk-ppc-kernels.py Python script and the “upgrade” NPK into the same directory.
- Run the Python script and supply the path to the NPK as the first argument; for example: “python dumpnpk-ppc-kernels.py ./routeros-powerpc-6.4.npk”
- Verify that the script extracted 4 kernel files from the NPK, located in a subdirectory called ‘boot’, named ‘kernel1’ through ‘kernel4’.
- Copy the ‘kernel’ files onto the MicroSD card. Insert into the SD slot on the (A)Hx2.
- Prepare the other RouterOS router (item #2 in the “materials needed” list; can be of any type, even x86) to be a Netboot host for the (A)Hx2 by doing the following:
- Copy the ‘openwrt-rb1100-linux-2.6.35-initrd.elf’ file to the router.
- Create a DHCP server on this router; set the “Boot File Name” of the DHCP Network to ‘openwrt-rb1100-linux-2.6.35-initrd.elf’
- Enable the TFTP server on RouterOS.
- Hook a serial cable up to the (A)Hx2, and plug ether13 on the (A)Hx2 into the other RouterOS router.
- Power up the (A)Hx2. Interrupt the RouterBOOT boot process when you see “Press any key”.
- Set the (A)Hx2 to boot via ethernet using options o, 1, and then x to exit and resume booting.
- You should eventually see “Please press Enter to activate this console.” Press Enter to get to a shell.
- Mount the SD card: “mount -t vfat /dev/mmcblk0 /mnt”
- Copy the kernel files from the SD card to the RAM disk temporarily: “cp /mnt/kernel? ~”
- Unmount the SD card: “umount /mnt”
- Prepare the boot partition of the RouterBOARD’s NAND for mounting: “ubiattach /dev/ubi_ctrl -m 0”
- Mount the boot partition of the RouterBOARD’s NAND: “mount -t ubifs /dev/ubi0_0 /mnt”
- Pick one of the 4 kernels you extracted from the NPK to replace the kernel on the boot partition, and copy it over on top of the existing kernel: e.g., “cp ~/kernel4 /mnt/kernel”
- Cleanly unmount the boot partition: “umount /mnt”
- Reboot the router: “reboot”
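For reference, here is the shell portion of the procedure collected into a single transcript (which kernel file you copy over will, of course, depend on which one turns out to be the uniprocessor kernel in your RouterOS version):

mount -t vfat /dev/mmcblk0 /mnt      # mount the MicroSD card
cp /mnt/kernel? ~                    # stash the extracted kernels in the RAM disk
umount /mnt
ubiattach /dev/ubi_ctrl -m 0         # attach the NAND boot partition
mount -t ubifs /dev/ubi0_0 /mnt      # mount the boot partition
cp ~/kernel4 /mnt/kernel             # replace the kernel (kernel4 was correct for 6.4)
umount /mnt
reboot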
At this point, RouterOS should boot up again. If it DOES NOT, and the router either goes into a reboot cycle or hangs while booting, you probably picked the wrong kernel. Try another one by starting again from step 8, and choosing a different kernel file at step 17.
Once you have found and successfully installed the uniprocessor kernel, you should be able to go to System -> Resources and verify that it only sees 1 CPU core instead of 2. At this point, try creating and booting a MetaROUTER guest again. If all went well, it should work. Congratulations! You can remove the SD card at this point; it does not need to remain in the router after you have finished the procedure.
To undo the change, simply re-Netinstall the RouterOS version of your choice, which will cause the original multiprocessor kernel to be copied back into the boot partition of the NAND.