[Devblog] Compile farm thoughts and ideas

TrueBrain
OpenTTD Developer
Posts: 1370
Joined: 31 May 2004 09:21

[Devblog] Compile farm thoughts and ideas

Post by TrueBrain »

Hi all,

For several years we have been running our own Compile Farm (CF), with great success. Since its introduction, we can produce binaries at a much steadier rate (before, you had to wait anywhere between an hour and several weeks before flavour N's maintainer uploaded the release binary), and the quality of these binaries is fully in our control. From my point of view, this has been one huge success.

Over the years the software running the CF has changed. We started out with some jails and cross compiling. These days we use Bamboo, and run several Virtual Machines to create all the binaries. Windows binaries are no longer cross compiled, and we use chroots to create Debian/Ubuntu deb-packages. The only target still cross compiled is Mac OS X; this is unlikely to change, as we don't have access to Mac hardware. We have come a long way, and the quality has improved steadily.

What you don't see is how very annoying it is to maintain the CF these days. (I am sure I wrote another dev-blog about that in the past :P) When Debian or Ubuntu creates a new "latest", a lot of tasks have to be performed:

- Create a new chroot
- Upgrade, test, ...
- Create a new "executable" in Bamboo
- Copy an existing job, and change all the settings correctly (this goes wrong so often, as there are many settings :P)
- Test-run on next release
- Pray

This often means we are slow with new targets, and nobody really wants to do it. Besides that, only Rubidium and I can do it, as it requires a lot of know-how of the systems involved.

As you might understand, that has to change.

Additionally, lately there have been requests to compile random patches from the interwebz and publish binaries from them. You can see several threads where people publish their patch, and then have to wait for supporter A to publish a Windows binary so people can play that patch. I am really happy (and a bit proud) we have people doing that; but from a SysOp point of view, there are a few issues with it. Mostly .. we have a CF, why not use it for these things?

Well, the main reason is: security. The CF at the moment is built on trust. That means that if anyone started a task on it which .. say .. launches a DDoS from the Makefile, it would happily do so. Not really something I want to open up to the public in one way or another. And I don't want the other devs to have to filter every patch, just because I was incapable of securing the stuff.

Luckily enough, I am not incapable (ghehe), but over time the CF has grown into kind of a monster, which makes it hard to change things enough to get the security requested above. The whole CF is on a separate network, and isolated from everything .. but because it has to publish its files, and for some projects (Simutrans mainly) it has to download files from other websites .. there are a few .. holes between this network and the main OpenTTD network. Well .. just one big huge hole, which says: forward all traffic ;) Limiting that network is possible, but I don't fancy breaking stuff over and over till we find out what should be open and what shouldn't.


So .. solutions!


Today I have been fiddling with Docker. For a while now I have had the idea that it could resolve most of our issues, and make maintaining the CF a lot easier. And as it turns out, it does! So next I will talk a bit about how I plan to use Docker. If you have any tips and suggestions, do let me know ;)


So what is Docker? Docker is a containment method. Where Virtual Machines run a complete machine, Docker runs on your own kernel, like a jail/chroot, but a lot more contained. So what do I mean by that? Well, without boring you with all kinds of details, I can prepare a Docker container that, when started, compiles stuff in a certain directory, and shuts down when that is done. Even more, I can make it do that without granting it any network access.

This of course is ideal for what we want. Even if horrible DDoS code were started in such a container, it wouldn't be able to do anything. The worst thing it can do is waste CPU cycles till the task is killed. That is more than acceptable to me.

During testing and installing all of this, I found out some awesome things. In random order:

- Bamboo supports Docker fine
- Bamboo starts a Docker container, mounts a volume, executes a command, destroys the container, and fetches some files from the mounted volume
- Docker can disallow network traffic from/to its containers
- Docker supports 32bit containers, although officially they say they don't. It required some fiddling around to get Debian and Ubuntu i386 containers, but in the end it was trivial
- I ended up creating my own base images, to make 32bit vs 64bit more clear
- I can prepare a Docker image so you too can compile OpenTTD exactly like the CF would
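To make the second and third points concrete, such a contained build run could be invoked roughly like this (a sketch only; the image name and host path are examples, not the CF's actual configuration):

```shell
# Run a build in a throwaway container with networking disabled.
# '--net=none' gives the container no network interfaces at all, so even
# hostile code in a Makefile cannot reach the outside world.
# The source tree is mounted as a volume, so the build artifacts can be
# fetched from the host afterwards; '--rm' destroys the container when done.
docker run --rm --net=none \
    -v /home/user/openttd-source:/source \
    openttd-cf:debian-amd64-jessie
```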

This opens a few doors. For example, because of the last point, I can easily share Docker images with other people, so they can improve on them. Take Mac OS X, which we cross compile. If I can get that into a Docker image (I haven't tried this part yet, but I am sure it will be successful ;) ), other people can use it too to make test-builds etc.

Also other projects, like Simutrans, which require a few other dependencies, are no longer difficult to maintain.

And behind another door: I can allow any dev to create a project that compiles a random patch from the interwebz, without having to worry about security. This means we can create binaries for people with patches in threads (upon request or something, most likely), creating a bigger userbase for them.

Sadly, we will still need a Windows VM to make the native Windows builds. But I will just bribe Rubidium to maintain that (he does anyway :P).

So what do I have at the moment? Well, let me show you:

Code:

$ docker images
REPOSITORY          TAG                   IMAGE ID            CREATED             VIRTUAL SIZE
openttd-cf          ubuntu-amd64-trusty   08fef76552b7        14 seconds ago      525.9 MB
openttd-cf          debian-amd64-wheezy   3a78f4f338e8        5 minutes ago       440.4 MB
openttd-cf          debian-amd64-jessie   a33aea9789b2        5 minutes ago       564.4 MB
openttd-cf          ubuntu-i386-trusty    a5b2ad73d51e        5 minutes ago       500.4 MB
openttd-cf          debian-i386-jessie    2786a64dd1b9        12 minutes ago      560.5 MB
openttd-cf          debian-i386-wheezy    9508cf0e0a08        12 minutes ago      432.2 MB
ubuntu-i386         trusty                53e2401d7c20        23 minutes ago      177.6 MB
ubuntu-amd64        trusty                8441e92dcb79        24 minutes ago      187.9 MB
debian-i386         jessie                ecc313246f26        26 minutes ago      122.2 MB
debian-i386         wheezy                4391b229012e        28 minutes ago      82.4 MB
debian-amd64        jessie                482859149a4e        31 minutes ago      125.1 MB
debian-amd64        wheezy                17c30a806df6        34 minutes ago      84.89 MB
(nice to know: Docker uses a layered filesystem, so when one image depends on another, only the changes are stored on disk; the virtual sizes are not the real sizes)

Dockerfile:

Code:

FROM @OS@-@ARCH@:@RELEASE@

MAINTAINER Patric 'TrueBrain' Stout

RUN DEBIAN_FRONTEND=noninteractive apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends build-essential debhelper fakeroot libfile-fcntllock-perl
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends libsdl1.2-dev zlib1g-dev liblzo2-dev liblzma-dev libfontconfig-dev libicu-dev
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends subversion git mercurial patch

ADD release.sh /usr/bin/

ENTRYPOINT ["release.sh"]
CMD ["dev"]
release.sh:

Code:

#!/bin/sh
# Build the deb packages, then collect them under bundles/ with CF-style names.
ln -sf os/debian debian && mkdir bundles
fakeroot make -f debian/rules binary
mv ../*dbg*.deb bundles/openttd-${VERSION}-linux-${OS}-${OS_RELEASE}-${ARCH}-dbg.deb
mv ../*.deb bundles/openttd-${VERSION}-linux-${OS}-${OS_RELEASE}-${ARCH}.deb
There are a few more lines of scripts and code to make it all work together, but this is the summary. The rest is scripted together .. I just run "./create.sh", and everything is done for me. I love automation.
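For illustration, the template-substitution step behind the `@OS@`/`@ARCH@`/`@RELEASE@` placeholders could look something like this (a sketch of the idea only; the real `create.sh` is not shown in this post, and the file names are assumptions):

```shell
# Instantiate the Dockerfile template for one target by replacing the
# placeholders; the image would then be built with
# 'docker build -t openttd-cf:${OS}-${ARCH}-${RELEASE} .'.
OS=debian
ARCH=amd64
RELEASE=jessie

# First line of the template, as shown above (written here so the
# example is self-contained).
printf 'FROM @OS@-@ARCH@:@RELEASE@\n' > Dockerfile.in

sed -e "s/@OS@/${OS}/g" \
    -e "s/@ARCH@/${ARCH}/g" \
    -e "s/@RELEASE@/${RELEASE}/g" \
    Dockerfile.in > Dockerfile
```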


I am currently very happy with the result. In Bamboo, a single command compiles a whole target via Docker. This makes it trivial to create/add new targets, or update current ones. I am very, very happy with it.


So what is next? To put it in a list:

- Update Bamboo to 5.9
- Install a Docker VM
- Deploy my new code there
- Start building debian/ubuntu via Docker
- Start building linux nightly and CI via Docker
- Check in code in our VCS (so everyone can use it, update it, etc)
- Investigate Mac OS X via Docker
- Switch from Windows 2003 to Windows 10 for the Windows VM
- Rewire how the CF publishes its files (it now goes over a few layers, don't get me started :P)
- Create a wiki page explaining the ins and outs of how we use Docker


So yeah, there is a long way to go before it is all done, but the progress has been awesome.


Hope you enjoyed this read. If not, why did you bother? You only have yourself to blame here ;)
The only thing necessary for the triumph of evil is for good men to do nothing.
HackaLittleBit
Director
Posts: 550
Joined: 10 Dec 2008 16:08
Location: tile 0x0000

Re: [Devblog] Compile farm thoughts and ideas

Post by HackaLittleBit »

Thanks!

I was just wondering.
A while ago I compiled OTTD on a Raspberry Pi 2.
That goes really fast (quad core).
Faster than my laptop, anyhow.

Maybe you could make use of them, with only a connection to the internet and otherwise stand-alone.

Edit: Still thinking a bit more.
You could use a remote terminal to access the Pi.
This is how I access my Pi here at home.
You avoid various operating system incompatibilities.
People could just test a patch!
This could maybe be automated.

Dunno if this is doable?
adf88
Chief Executive
Posts: 644
Joined: 14 Jan 2008 15:51
Location: PL

Re: [Devblog] Compile farm thoughts and ideas

Post by adf88 »

Great to hear this news. There were many times when the CF could have been useful for me. I just didn't want to bother you and eat your CPU cycles. But if the process were automated, that would be great!

I have a vision of security model that could work here. I'll share the ideas later (I have no time right now).

Anyway, do you consider a fully automated, publicly available solution for compiling custom builds? This could be a web interface like "Please, upload your patch here and I will make a binary for you". Proper hardening could make it doable.
:] don't worry, be happy and checkout my patches
TrueBrain
OpenTTD Developer
Posts: 1370
Joined: 31 May 2004 09:21

Re: [Devblog] Compile farm thoughts and ideas

Post by TrueBrain »

HackaLittleBit wrote:(..)
Maybe you could make use of them with only a connection to the internet and stand alone.
(..)
I don't really follow; why would we use raspberries?
adf88 wrote:(..)
I have a vision of security model that could work here. I'll share the ideas later (I have no time right now).
(..)
Please do share!

In the initial steps, once this new CF is up and running, it will be up to a dev to feed a patch to the CF to be compiled. In the past we already added some patchpacks to it, and this will become more common (as in: you give a git location to a dev, he likes what you are doing, he adds it to the CF; every push you do results in a binary). There will be restrictions ofc (a max of one build a day/week/what-ever), but we will work out the details when we get there. We have already been doing this, but we will be more public about it, and do it more often (as it becomes easier; at least, that is the plan :P).

Fully automated and publicly available would require some management software; I most likely won't have any time for that, so it would fully depend on whether someone is interested in writing that for us. I don't have real issues with it, but we need to think of some good system to prevent abuse. You can think of a karma system, for example: you start off with 100 karma points. Every build you make costs 30 karma points. And you earn karma points for every download of your file, and for every person voting for your build. Just randomly spitting out an idea here, no idea if it holds. But the general idea of it: if you automate it completely, make sure the community can decide what is good and what is bad.
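Purely to illustrate the bookkeeping such a karma scheme would need (the numbers match the example above; the function names are made up, nothing more):

```shell
# Toy karma bookkeeping sketch: start with 100 points, a build costs 30,
# and every download of the resulting binary earns 1 point back.
karma=100
build_cost=30

# Accept a build request only if enough karma is left; print the verdict.
request_build() {
    if [ "$karma" -lt "$build_cost" ]; then
        echo "denied"
    else
        karma=$((karma - build_cost))
        echo "accepted"
    fi
}

# Each download of the produced binary earns one point back.
record_download() {
    karma=$((karma + 1))
}
```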

Another thing to consider: find a clean way to only allow OpenTTD related projects to compile. And to ensure there isn't something compiling 24/7 ;)
HackaLittleBit
Director
Posts: 550
Joined: 10 Dec 2008 16:08
Location: tile 0x0000

Re: [Devblog] Compile farm thoughts and ideas

Post by HackaLittleBit »

You can install a full operating system on an SD card.
Making a mirror of that is a piece of cake.
Every time you do that, you format the card.
So every week or so you swap the SD card for one with fresh updates.
Yes, I know, it is not automated.
But very secure.
So not connected to your internal network, or only in a very limited way.
I was just thinking about security and making testing of provided (user) patches easier.
If a user wants to try out a patch, he/she should be logged on.
A button on the web page could start the whole sequence (uploading, applying the patch, compiling, starting the game, and sending an instruction to start a remote terminal).
That way more people would be involved, maybe.

Why a Pi 2: cheap, fast enough, and the operating system lives on an SD card.
The SD card is formatted before each use to destroy data.
JGR
Tycoon
Posts: 2557
Joined: 08 Aug 2005 13:46
Location: Ipswich

Re: [Devblog] Compile farm thoughts and ideas

Post by JGR »

This sounds very interesting. :)

This is pure speculation on my part, but I'd suspect that automatic builds would encourage people to commit and push before checking locally that what they've done builds and works correctly, especially if you add a public "submit your diff using this web form" type interface.
I'm not sure about automated karma, that sounds like it'd encourage people to game the system, and would probably be quite a bit of work to implement. It might be simpler to just kill off or throttle build jobs if they (cumulatively) consume excessive amounts of CPU time.

One other thing which might be an issue is tying the produced binaries back to the source. Will the CF also export the git tree, diff files, or a source bundle? If a dev submits a build job pointing at a remote git tree, and then that git tree goes down or a dev later deletes/force pushes over the branch pointer, could the binaries become orphaned?
HackaLittleBit wrote:Why a PI 2...
Raspberry Pis and similar embedded boards are slow and limited devices, to put it mildly, and administering a cluster of them would be a lot of work for no benefit.
Whilst I have happily used them at work and home for a number of (low computational requirements) tasks, I've had a number of unpleasant failure modes which would be very awkward to fix remotely.
You can do a lot more work with a much higher reliability and lower cost per unit computation with standard server(s).
Ex TTDPatch Coder
Patch Pack, Github
TrueBrain
OpenTTD Developer
Posts: 1370
Joined: 31 May 2004 09:21

Re: [Devblog] Compile farm thoughts and ideas

Post by TrueBrain »

Yeah, I am totally missing what HackaLittleBit is trying to say. It reads to me as: we should use N because we can!! With no real benefit. Anyway, CPU etc. we have sufficient; that is not a limiting factor ;)

Thanks for the feedback JGR; for sure we will have to figure out how to do it correctly .. or we might just be over-engineering this all, and things will sort themselves out ;)

Currently, for every binary the CF produces, it also produces a 'source' bundle. This will always stay that way, as it is our way of saying: this binary was made with this source, no matter what the externals do after that :) ('fun' fact: we have source tarballs of every nightly ever created .. back to r1 of the current subversion ;) :D)
burty
Transport Coordinator
Posts: 326
Joined: 16 Jun 2006 17:18
Location: Somwhere near a computer

Re: [Devblog] Compile farm thoughts and ideas

Post by burty »

I did enjoy reading the topic :) quite informative.

I think that some form of management interface to allow a more "open" approach to builds would greatly aid people who have patchpacks with a lot of users who enjoy said patchpacks. As you said, the problem is two-fold:
1) How do we limit it to ONLY OpenTTD?
2) How do we stop people from abusing it? (This would need some form of trust system, and while the karma idea wouldn't work as-is, it may be along the right lines.)

I would personally enjoy the challenge of trying to create something for the public management section, but until we can come up with a concrete(ish) solution to 2, I don't think we can feasibly do this on a self-run basis.
TrueBrain
OpenTTD Developer
Posts: 1370
Joined: 31 May 2004 09:21

Re: [Devblog] Compile farm thoughts and ideas

Post by TrueBrain »

Just a small update:

Today I managed to produce OSX binaries (a 10.6-10.7 i386 version, and a 10.8+ x86_64 version). So I am very happy with the progress ;) The OSX images support a basic version of "macports", making it a lot easier to update dependencies etc. *big smile*. Big thanks go to https://github.com/tpoechtrager/osxcross, which "just works", and allows us to cross compile for OSX from Linux!

For those that are interested in the work so far, feel free to browse around in:

https://devs.openttd.org/~truebrain/docker-cf/

In 'base' there is information on how to make the Debian and Ubuntu base images. After that, you can create the openttd-cf specific images. Then it is just a matter of doing:

docker run -v /home/user/openttd-source:/source openttd-cf:<image>

And it will produce a bundles/openttd-*dev* file for you, which contains the bundle the CF would create for that target.

Any suggestions/ideas/patches/etc are very much welcome.

Owh, and a while back I installed a dummy Bamboo which managed to produce the binaries via Docker just fine \o/
Kogut
Tycoon
Posts: 2493
Joined: 26 Aug 2009 06:33
Location: Poland

Re: [Devblog] Compile farm thoughts and ideas

Post by Kogut »

Thanks for sharing this! I also see that I really should play with Docker, it seems to be a really useful tool.
Correct me If I am wrong - PM me if my English is bad
AIAI - AI for OpenTTD