FLOSS Project Planets

Thorsten Glaser: Sorry about the MediaWiki-related breakage in wheezy

Planet Debian - Tue, 2014-04-01 08:24

I would like to publicly apologise for the inconvenience caused by my recent updates to the mediawiki and mediawiki-extensions source packages in Debian wheezy (stable-security).

As for reasons… I’m doing Mediawiki-related work at my dayjob, as part of FusionForge/Evolvis development, and try to upstream as much as I can. Our production environment is a Debian wheezy-based system with a selection of newer packages, including MediaWiki from sid (although I also have a test system running sid, so my uploads to Debian are generally better tested). I haven’t had experience with stable-security uploads before, and made small oversights (and did not run the full series of tests on the “final”, permitted-to-upload, version, only beforehand) which led to the problems. The situation was a bit complicated by the need to update the two packages in lockstep, to fix an RC file/symlink conflict bug, which was hard enough to do in sid already, plus the desire to fix other possibly-RC bugs at the same time. I also got no external review, although I cannot blame anyone since I never asked anyone explicitly, so I accept this as my fault.

The issues with the updates are:

  • mediawiki 1.19.5-1+deb7u1 (the previous stable-security update) was not made by me but by Jonathan Wiltshire
  • mediawiki 1.19.11+dfsg-0+deb7u1 (made by me) was fine, fixed the bugs it was supposed to, but was delayed after being uploaded to security-master-unembargoed
  • mediawiki 1.19.14+dfsg-0+deb7u1 was supposed to be a mostly upstream update, but I decided to add changes to fix issues pointed out by lintian (not trivial ones), and mistakenly forgot to remove two lines that should not have crept in from sid
  • mediawiki 1.19.14+dfsg-0+deb7u2 was quickly uploaded to fix this issue but took about half a day to be ACCEPTed
  • mediawiki-extensions 3.5~deb7u1 should have been named 2.12 but could not be, due to the aforementioned lockstep update requirement and version checks in maintainer scripts; it fixes the issues but does not add other changes from 3.5 in sid… unfortunately, the packaging uses cdbs (which I dislike quite a lot, but as the newcomer in the team I decided to accept it and go on; changing the existing packaging would be quite some effort anyway) and wants debian/control to be regenerated from control.in… which I thought I had done, and normally do…
  • mediawiki-extensions 3.6 (in sid) fixes another dir/symlink conflict that showed up after 3.5 was made. I’ve requested upload permission for regenerating debian/control and asked whether I am allowed to include this fix as well

My unfamiliarity with some of the packaging concepts used here, combined with this being something I do during $dayjob (which can limit the time I can invest, although I’m doing much more work on Mediawiki in Debian than I thought I could do on the job), contributed to some of those oversights. I guess I should also install a vanilla wheezy environment somewhere for testing… I do not normally do stable uploads (jmw did them before), so I was not prepared for that.

And, while here: thanks to the Debian Security Team for putting up with me (also in this week’s FusionForge issue), and thanks to Mediawiki upstream for agreeing to support the releases shipped in Debian stable for longer, so we can easily do stable-security updates.

Categories: FLOSS Project Planets

Drupalize.Me: Questionnaire Confirms: Designers Aren't Clear How To Help Drupal 8

Planet Drupal - Tue, 2014-04-01 08:15

As mentioned in my previous blog post "Catching the Community Train", Lisa, Bojhan and I will be working on a website to better facilitate the process of designers contributing to Drupal. Following my last blog post we sent out a questionnaire to current and previous contributors in order to gain some valuable insights that will help us move forward. In this blog post I will analyze the answers we received and share what they mean to me and how they will be instrumental in our success in helping designers get involved in Drupal more easily.

Categories: FLOSS Project Planets

Michal Čihař: New SSL certificates

Planet Debian - Tue, 2014-04-01 08:14

Today, I've replaced the server SSL certificates with new ones issued by GlobalSign. These should not suffer from the same trust problems as the CACert ones used so far (especially now that the CACert root certificate has been removed from Debian).

While doing this, I had to enable SNI on the server so it can decide which SSL certificate to use. This should work with any decent browser, but your scripts might have problems; I hope these will be rare. Anyway, if you face any issues because of this, please let me know.
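For the curious, here is a minimal Python sketch (an illustration, not from the original post) of what an SNI-aware client does: it names the host during the TLS handshake, so a server with several certificates behind one IP address can pick the right one. The hostname is whichever site you want to test; nothing here is specific to this server.

```python
import socket
import ssl

# SNI lets one IP address serve several certificates: the client sends the
# hostname inside the TLS handshake so the server can choose the right one.
assert ssl.HAS_SNI, "this Python build lacks SNI support"

def fetch_certificate(hostname, port=443):
    """Open a TLS connection, sending `hostname` via SNI, and return the peer cert."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        # server_hostname= is what triggers the SNI extension
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()
```

From the shell, `openssl s_client -connect host:443 -servername host` performs the same check: the `-servername` option supplies the SNI hostname.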

Other than that, I've also tweaked the SSL setup to follow current best practice, which could cause trouble for some ancient clients, but I hope those are non-existent in this case :-). See the Qualys SSL report for more details.

Anyway, thanks to GlobalSign's free SSL certificates for open source projects, you can use hosted Weblate without any SSL warnings.

PS: A similar change (just without SNI) happened last week on the phpMyAdmin web servers as well.

Filed under: English phpMyAdmin Weblate

Categories: FLOSS Project Planets

Raphael Geissert: Rant: no more squeeze LTS

Planet Debian - Tue, 2014-04-01 08:00
Following my blog post about Long Term Support for Debian squeeze, the news was picked up by Slashdot, Reddit (again), Barrapunto, Twitter, and Phoronix (in spite of their skepticism).

Over 300 people, some representing their companies, have contacted the security team since the news about the LTS came out - it all seemed like things were finally rolling.

However, a few days before the coordination mailing list was set up, a not-so-friendly mail arrived from a legal officer of a company that produces an RPM-based derivative with Long Term Support and paid support contracts - the company-that-can't-be-named from here on, due to trademark concerns.

Long story short: the Debian Squeeze LTS project has been boycotted and threatened. Unfair competition (antitrust law) has been brought up against the project, among other threats.

So, great move company-that-can't-be-named, you got it - there won't be LTS, it's been decided and the interested parties have been notified. Perhaps you want to take over the actual development?
Categories: FLOSS Project Planets

Daniel Greenfeld: Two Scoops of Goblins

Planet Python - Tue, 2014-04-01 08:00

While Audrey Roy Greenfeld and I were contemplating our next Two Scoops Press book topic, it came down to a decision between Pyramid, Flask, and mythical creatures. Inspired by Django's magical flying pony, Pyramid's scary alien dude, and even the idea of a magical Flask pouring out wonderful projects, we've decided to go with mythical creatures.

Specifically, we're writing about goblins, hence the title of this blog post.

What that means is that the next book we publish will be fantasy. Going forward all the books we write will be fiction.

If we ever write a new Two Scoops of Django book, it will be a fantasy about a magical flying pony who eats ice cream. That way we'll confuse the already muddled Amazon.com search requests for 'Django' even more.

Since this is a technical blog, I'll be moving my fiction-writing articles to my author blog at dannygreenfeld.com.

Categories: FLOSS Project Planets

Russell Coker: Comparing Telcos Again

Planet Debian - Tue, 2014-04-01 07:44

Late last year I compared the prices of mobile providers after Aldi started getting greedy [1]. Now Aldi have dramatically changed their offerings [2] so at least some of the phones I manage have to be switched to another provider.

There are three types of use that are of interest to me. One is significant use: hours of calls per month, lots of SMS, and at least 2G of data transfer. Another is very light use, maybe a few minutes of calls per month, where the aim is to have the lowest annual price for an almost unused phone. The third is somewhere in between – and being able to easily switch between plans for moderate and significant use is a major benefit.

Firstly, please note that I have no plans to try to compare all telcos; I'll only compare ones that seem to have good offers. Ones with excessive penalty clauses or other potential traps are excluded.

Sensible Plans

The following table shows the minimum costs for plans where the amount paid counts as credit for calls and data; this makes it easy to compare those plans.

| Plan | Cost per min or SMS | Data | Minimum cost |
|------|---------------------|------|--------------|
| AmaySIM As You Go [3] | $0.12 | $0.05/meg, $19.90 for 2.5G in 30 days, $99.90 for 10G in 365 days | $10 per 90 days |
| AmaySIM Flexi [4] | $0.09 | 500M included, free calls to other AmaySIM users, $19.90 for 2.5G in 30 days, $99.90 for 10G in 365 days | $19.90 per 30 days |
| Aldi pre-paid [5] | $0.12 | $0.05/meg, $30 for 3G in 30 days | $15 per 365 days |

Amaysim has a $39.90 “Unlimited” plan which doesn’t have any specific limits on the number of calls and SMS (unlike Aldi “Unlimited”) [6]; that plan also offers 4G of data per month. The only downside is that changing between plans is difficult enough to discourage people from doing so, but if you use your phone a lot every month then this would be OK. AmaySIM uses the Optus network.

Lebara has a $29.90 “National Unlimited” plan that offers unlimited calls and SMS and 2G of data [7]. The Lebara web site doesn’t seem to include details such as how long pre-paid credit lasts; the lack of such detail doesn’t give me confidence in their service. Lebara uses the Vodafone network, which used to have significant problems; hopefully they have fixed them. My lack of confidence in the Vodafone network and in Lebara’s operations makes me inclined to avoid them.

Obscure Plans

Telechoice has a $28 per month “i28” plan that offers unlimited SMS, $650 of calls (which can be international) at a rate of over $1 per minute, unlimited calls to other Telechoice customers, and 2G of data [8]. According to the Whirlpool forum they use the Telstra network, although the TeleChoice web site doesn’t state this (one of many failings of a horrible site).

The TeleChoice Global Liberty Starter plan costs $20 per month and includes unlimited calls to other TeleChoice customers, unlimited SMS, $500 of calls at a rate of over $1 per minute, and 1G of data [9].

Which One to Choose

For my relatives who only rarely use their phones the best options are the AmaySIM “As You Go” [3] plan which costs $40 per 360 days and the Aldi prepaid which costs $15 per year. Those relatives are already on Aldi and it seems that the best option for them is to keep using it.

My wife typically uses slightly less than 1G of data per month and makes about 25 minutes of calls and SMS. For her use the best option is the AmaySIM “As You Go” [3] plan which will cost her about $4 in calls per month and $99.90 for 10G of data which will last 10 months. That will average out to about $13 per month. It could end up being a bit less because the 10G of data that can be used in a year gives an incentive to reduce data use while previously with Aldi she had no reason to use less than 2G of data per month. Her average cost will be $11.30 per month if she can make 10G of data last a year. The TeleChoice “Global Liberty Starter” [9] plan is also appealing, but it is a little more expensive at $20 per month, it would be good value for someone who averages more than 83 minutes per month and also uses almost 1G of data.

Some of my relatives use significantly less than 1G of data per month. For someone who uses less than 166MB of billable data per month then the Aldi pre-paid rate of $0.05 per meg [5] is the best, but with a modern phone that does so many things in the background and a plan that rounds up data use it seems almost impossible to be billed for less than 300MB/month. Even when you tell the phone not to use any mobile data some phones still do, on a Nexus 4 and a Nexus 5 I’ve found that the only way to prevent being billed for 3G data transfer is to delete the APN from the phone’s configuration. So it seems that the AmaySIM “As You Go” [3] plan with a 10G annual data pack is the best option.

One of my relatives needs less than 1G of data per month and not many calls, but needs to be on the Telstra network because their holiday home is out of range of Optus. For them the TeleChoice Global Liberty Starter [9] plan seems best.

I have been averaging a bit less than 2G of data transfer per month. If I use the AmaySIM “As You Go” [3] plan with the 10G data packs then I would probably average about $18 worth of data per month. If I could keep my average number of phone calls below $10 (83 minutes) then that would be the cheapest option. However I sometimes spend longer than that on the phone (one client with a difficult problem can involve an hour on the phone). So the TeleChoice i28 plan looks like the best option for me; it gives $650 of calls at a rate of $0.97 per minute + $0.40 connection (that’s $58.60 for an hour-long call – I can make 11 of those calls in a month) and 2G of data. The Telstra coverage is an advantage for TeleChoice; I can run my phone as a Wifi access point so my wife can use the Internet when we are out of Optus range.
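For readers who want to plug in their own usage numbers, the comparisons above can be sketched as a small Python calculation. The rates are the ones quoted in this post (AmaySIM "As You Go" at $0.12 per minute/SMS with a $99.90 10G yearly data pack; TeleChoice at $0.97 per minute plus a $0.40 connection fee) and may of course change.

```python
# Rough monthly-cost model for the pay-as-you-go plan discussed above.
def amaysim_monthly_cost(call_minutes, data_pack_price=99.90, pack_months=12):
    """Average monthly cost: per-minute calls plus a yearly data pack
    spread over the number of months it is made to last."""
    calls = call_minutes * 0.12          # $0.12 per minute or SMS
    data = data_pack_price / pack_months # 10G pack amortised monthly
    return calls + data

# An hour-long TeleChoice call at $0.97/min plus the $0.40 connection fee:
telechoice_hour = 60 * 0.97 + 0.40  # $58.60, as computed in the text
```

For example, 25 minutes of calls with the 10G pack stretched over a full year works out to roughly $11 per month, matching the estimate above for my wife's usage.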

Please let me know if there are any good Australian telcos you think I’ve missed or if there are any problems with the above telcos that I’m not aware of.

Related posts:

  1. Aldi Changes, Cheap Telcos, and Estimating Costs I’ve been using Aldi as my mobile phone provider for...
  2. Aldi Deserves an Award for Misleading Email Aldi Mobile has made a significant change to their offerings....
  3. Dual SIM Phones vs Amaysim vs Contract for Mobile Phones Currently Dick Smith is offering two dual-SIM mobile phones for...
Categories: FLOSS Project Planets

Gergely Nagy: Motif on Wayland

Planet Debian - Tue, 2014-04-01 07:40

Earlier this year, on the fourth of February, Matthew Garrett posted an interesting tweet. The idea of porting Motif to Wayland sounded quite insane, which is right up my alley, so I've been pondering and preparing since. The result of that preparation is a fundraiser campaign; if it is successful, I'll dive deeper into the porting effort, to deliver a library that brings the Motif we used to love back in the day to a modern platform.

The aim of the project is to create a library, available under the GNU Lesser General Public License (version 2.1 or later, the same license the original Motif is under), ported to Wayland, with full API compatibility if at all possible. In the end, we want the result to feel like Motif and to look like Motif, so that any program that can be compiled against Motif will also work with the ported library. I will start fresh, using modern tools and methodologies (including, but not limited to, autotools and test-driven development, on GitHub) to develop the new library, instead of changing the existing code base. Whether the goal is fully achievable remains to be seen, but the API - and the look of the widgets, of course - will feel like Motif, even in a Wayland context, and we will do our best to either make the API 100% compatible with Motif, or provide a compatibility library.

Since I have a day job, in order to be able to spend enough time on the library, I will need funding that more or less matches my salary. The more raised, the more time will be allocated to the porting project. Should we exceed the funding goal, there are a few stretch goal ideas that can be added later (such as porting the Motif Window Manager and turning it into a Wayland compositor, with a few modern bells and whistles).

For further information, such as perks, updates and all, please see the campaign, or the project website!

Categories: FLOSS Project Planets

Logilab: Code_Aster back in Debian unstable

Planet Python - Tue, 2014-04-01 07:40

Last week, a new release of Code_Aster entered Debian unstable. Code_Aster is a finite element solver for partial differential equations in mechanics, mainly developed by EDF R&D (Électricité de France). It is arguably one of the most feature-complete free software packages available in this domain.

Aster has been in Debian since 2012 thanks to the work of the Debian Science team. Yet it has always been a somewhat problematic package, with a couple of persistent Release Critical (RC) bugs (FTBFS, installability issues), and it never actually entered a stable release of Debian.

Logilab has committed to improving Code_Aster for a long time in various areas, notably through the LibAster friendly fork, which aims at turning the monolithic Aster into a library, usable from Python.

Recently, the EDF R&D team in charge of the development of Code_Aster took several major decisions, including:

  • the move to Bitbucket forge as a sign of community opening (following the path opened by LibAster that imported the code of Code_Aster into a Mercurial repository) and,
  • the change of build system from a custom makefile-style architecture to a fine-grained Waf system (taken from that of LibAster).

The latter obviously led to significant changes on the Debian packaging side, most of which go in a sane direction: the debian/rules file slimmed down from 239 lines to 51, and a bunch of tricky install-step manipulations were dropped, leading to something much simpler and closer to upstream (see #731211 for details). From the upstream perspective, this re-packaging effort based on the new build system may be an opportunity to update the installation scheme (in particular by declaring the Python library as private).

Clearly, there's still room for improvement on both sides (like building with the new metis library, or shipping several versions of Aster: stable/testing, MPI/serial). All in all, this is good for both Debian users and upstream developers. At Logilab, we hope that this effort will consolidate our collaboration with EDF R&D.

Categories: FLOSS Project Planets

Drupal Easy: DrupalEasy Podcast 126: Where is Yugoslavia? (Théodore Biadala)

Planet Drupal - Tue, 2014-04-01 06:40
Download Podcast 126

Théodore Biadala (nod_), technical consultant with Acquia, and one of the Drupal 8 JavaScript maintainers, joins Andrew, Ted, and Mike to talk about the current (and future) state of JavaScript in Drupal core. Also discussed is Acquia’s new certification program, DrupalCon Latin America, and picks of the week that you’re not going to want to miss!


Categories: FLOSS Project Planets

Microsoft's going to release source code of Windows Phone

Planet KDE - Tue, 2014-04-01 06:33

As you all may know, Windows Phone 8 (codename Apollo) has not been the great "Android replacer" that Microsoft hoped for. Developers do not like this OS, and neither do users: it is too complex and too slow. Given that it's not worth spending resources on Windows Phone 8, Redmond has decided that support for this system will end in July 2014.
So, trying to go for broke, Microsoft decided to publicly release the source code. The day of the public release has not yet been decided: the code needs to be cleaned up and organized first, so it will probably happen after July. Redmond hopes to attract, in particular, people who are disappointed by the NSA scandals.
Microsoft declared that all the source code of Windows Phone will be released, including the NT kernel.
Obviously, the NT kernel used for Windows Phone is not exactly the same one that runs in the desktop editions of Windows. The Phone NT kernel's codename is "April fool".
By the way, if you are really looking for an "open source Windows" you might choose ReactOS. And if you want free software (not just open source), you should consider trying GNU/Linux distros like Debian (or Ubuntu, though it's not entirely free software "out of the box" these days).

Categories: FLOSS Project Planets

Bits from Debian: Debian Project elects Javier Merino Cacho as Project Leader

Planet Debian - Tue, 2014-04-01 06:25

This post was an April Fools' Day joke.

In accordance with its constitution, the Debian Project has just elected Javier Merino Cacho as Debian Project Leader. More than 80% of voters put him as their first choice (or equal first) on their ballot papers.

Javier's large majority over his opponents shows how his inspiring vision for the future of the Debian project is largely shared by the other developers. Lucas Nussbaum and Neil McGovern also gained a lot of support from Debian project members, both coming many votes ahead of the None of the above ballot choice.

Javier has been a Debian Developer since February 2012 and, among other packages, works on keeping the mercurial package under control, as mercury is very poisonous for trout.

After it was announced that he had won this year's election, Javier said: "I'm flattered by the trust that Debian members have put in me. One of the main points in my platform is to remove the 'Debian is old and boring' image. In order to change that, my first action as DPL is to encourage all Debian Project Members to wear a clown red nose in public."

Among others, the main points of his platform relate to improving the communication style on mailing lists through an innovative filter called aponygisator, making Debian less "old and boring", and solving technical issues among developers with barehanded fights. Betting on the fights will be not only allowed but encouraged, for fundraising reasons.

Javier also contemplated the use of misleading talk titles such as The use of cannabis in contemporary ages: a practical approach and Real Madrid vs Barcelona to lure new users and contributors to Debian events.

Javier's platform was collaboratively written by a team of communication experts and high profile Debian contributors during the last DebConf. It has since evolved thanks to the help of many other contributors.

Categories: FLOSS Project Planets

Petter Reinholdtsen: ReactOS Windows clone - nice free software

Planet Debian - Tue, 2014-04-01 06:10

Microsoft has announced that Windows XP reaches its end of life on 2014-04-08, in 7 days. But there are heaps of machines still running Windows XP, and depending on Windows XP to run their applications, and upgrading will be expensive, both in money and in the amount of effort needed to migrate from Windows XP to a new operating system. Some obvious options (buy a new Windows machine, buy a MacOSX machine, install Linux on the existing machine) are already well known and covered elsewhere. Most of them involve leaving the user applications installed on Windows XP behind and trying out replacements or updated versions. In this blog post I want to mention one strange bird that allows people to keep the hardware and the existing Windows XP applications and run them on a free software operating system that is Windows XP compatible.

ReactOS is a free software operating system (GNU GPL licensed) working on providing an operating system that is binary compatible with Windows, able to run Windows programs directly and to use Windows drivers for hardware directly. The project goal is for Windows users to keep their existing machines, drivers and software, and gain the advantages of using an operating system without usage limitations caused by non-free licensing. It is a Windows clone running directly on the hardware, so quite different from the approach taken by the Wine project, which makes it possible to run Windows binaries on Linux.

The ReactOS project shares code with the Wine project, so most shared libraries available on Windows are already implemented. There is also a software manager like the ones we are used to on Linux, allowing the user to install free software applications with a single click directly from the Internet. Check out the screenshots on the project web site for an idea of what it looks like (it looks just like Windows before Metro).

I do not use ReactOS myself, preferring Linux and Unix-like operating systems. I've tested it, and it works fine in a virt-manager virtual machine. The browser, minesweeper, notepad etc. are working fine as far as I can tell. Unfortunately, my main test application is the software included on a CD with the Lego Mindstorms NXT, which seems to install just fine from CD but fails to leave any binaries on the disk after the installation. So no luck with that test software. No idea why, but I hope someone else figures out and fixes the problem. I've tried the ReactOS Live ISO on a physical machine, and it seemed to work just fine. If you like Windows and want to keep running your old Windows binaries, check it out by downloading the installation CD, the live CD or the preinstalled virtual machine image.

Categories: FLOSS Project Planets

Wunderkraut blog: Taking the Acquia Certified Drupal Developer exam

Planet Drupal - Tue, 2014-04-01 04:59

Very recently a new thing popped up in the Drupal community that has everybody talking: the Acquia certification for Drupal developers. I'm writing this article minutes after actually taking this exam, to share my impressions with you.

Why did I take the certification?

One of the things that prompted me to take it was Angie "webchick" Byron's article about her experience taking the exam. It sounded interesting, but also relevant to me as a Drupal developer: an exam that could determine my value in this field.

A second aspect I'd like to mention is the fact that I work with Drupal but do not have an IT background. I am what you would call self-taught. Therefore the idea of having a certificate to prove my worth sounded good to me. So I took the opportunity, and the exam that came with it.

Impressions

There are 2 ways you can take the exam: on site (physically) or online. To deliver its online exam, Acquia collaborates with Web Assessor, a secured testing environment.

I chose the second option which meant I had to install some software onto my computer and go through a substantial verification process. This included biometric baseline recording of the face and keystrokes in order to be able to authenticate when I start taking the test. I know, advanced stuff.

To a certain extent I understand the need for this highly secure exam taking environment that prevents people from cheating. However, Web Assessor could have made things easier for people to take these necessary steps. What do I mean by this?

For one, there are contradictory instructions on the site: in one place it says you don't need an external webcam and microphone, and in another it says you are not allowed to use the built-in ones. So which is it? In the end, I did it with my internal ones, so it is possible.

I finally got through the hurdles of installing the software, closing all my apps, creating the biometric baseline, etc., to arrive at the booking of the exam. It was very flexible and I could book a time on the same day: that's what I did. I liked that very much. However, I had to wonder about the timezone: just select an hour and hope it corresponds to your timezone. There was no indication as to which one was being used. Luckily, it was the right one so there was no problem. Therefore, in case you are wondering, it will be the timezone you are in when you book.

The whole process of preparing for the exam with Web Assessor took about an hour. Not so much the settings themselves, but reading and understanding what I had to do, what I could do and what I couldn't.

But what about the test?

I'm going to come right out and say it: the test was hard. But I was expecting it to be hard, because otherwise it would be pointless. It had only multiple choice questions, with only one correct choice most of the time. For the others, you get checkboxes instead of radio buttons.

Timewise, I had 90 minutes which for me was enough. I even got a chance to review some questions to change the answers and submitted the exam with some minutes to spare. And I appreciated the option to flag questions I'd like to review later.

I can't really go into what questions I got or how they were formulated but they were well balanced with regards to the domains covered by the exam.

One problem I had though was with the code formatting. Some of the questions contained code snippets that were a bit tricky to read / understand. I believe a bit more effort can be dedicated to making them more readable - especially when they are in the available choices. I recommend therefore, if possible, putting all code snippets in code blocks and properly spacing them.

I submitted the test and immediately got my result. Passed. It gave me a very good feeling and made me happy to have taken it. One thing I was disappointed with was that I couldn't see which questions I got wrong. This may be just me, but I was left obsessing over those battleground questions that made me think so much. But anywho, we move on and develop some more Drupal sites.

Congrats Acquia on this great new initiative!

Categories: FLOSS Project Planets

Understanding the kill command, and how to terminate processes in Linux

LinuxPlanet - Tue, 2014-04-01 04:55

One of my biggest pet peeves as a Linux sysadmin is when I see users, or even other sysadmins, using kill -9 on the first attempt to terminate a process. The reason this bugs me so much is that it shows either a lack of understanding of the kill command or just plain laziness. Rather than going on a long rant about why this is bad, I wanted to write an article about the kill command and how signals work in Linux.

Using the kill command

In Linux and Unix, when you want to stop a running process, you can use the kill command from the command line interface. The kill command in its most basic form is pretty simple to work with: if you want to terminate a process, you simply need to know the process's ID number.

Finding the PID of a running process

To find the process ID, or PID, of a running process we will use the ps command. This command lists running processes and some information about them. The ps command has many options and many ways of showing processes; I could dedicate an entire article to ps. For this example, I am just going to use the ps command with the -C flag, which looks up processes by the name of the command that's being run.

Syntax:

# ps -C <command>

Example:

# ps -C nginx
  PID TTY          TIME CMD
  566 ?        00:00:00 nginx
  567 ?        00:00:00 nginx
  568 ?        00:00:06 nginx
  570 ?        00:00:06 nginx
  571 ?        00:00:06 nginx

In the above example I am using the ps command to search for nginx processes. If you look at the output you will see that the PID for each process is listed in the first column. We will use these numbers to kill the nginx processes.
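As a rough illustration of what that PID lookup does under the hood, here is a minimal Python sketch that scans Linux's /proc filesystem for processes whose command name matches. This is an illustrative approximation, not how ps is actually implemented; ps handles many more details.

```python
import os

def pids_by_command(name):
    """Return PIDs whose command name matches `name`, roughly like `ps -C name`.

    Reads /proc/<pid>/comm, so this sketch is Linux-specific.
    """
    pids = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue  # skip non-process entries like /proc/meminfo
        try:
            with open(f"/proc/{entry}/comm") as f:
                if f.read().strip() == name:
                    pids.append(int(entry))
        except OSError:
            pass  # the process exited between listdir() and open()
    return pids
```

On a machine running nginx, `pids_by_command("nginx")` would return the same PID column that the ps output above shows.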

Killing a process with kill

Now that we have found the PID of the process we want to stop, we can use the kill command to terminate the process.

Syntax:

# kill <pid>

Example:

root@blog:/# ps -C nginx
  PID TTY          TIME CMD
  566 ?        00:00:00 nginx
  567 ?        00:00:00 nginx
  568 ?        00:00:08 nginx
  570 ?        00:00:09 nginx
  571 ?        00:00:08 nginx
root@blog:/# kill 571
root@blog:/# ps -C nginx
  PID TTY          TIME CMD
  566 ?        00:00:00 nginx
  567 ?        00:00:00 nginx
  568 ?        00:00:08 nginx
  570 ?        00:00:09 nginx
 8347 ?        00:00:00 nginx

As you can see in the example, running kill with the PID 571 stopped the nginx process with that process ID. Note that another nginx process took the place of the one I killed; this is because I killed an nginx worker process. To stop nginx completely I would need to kill the master nginx process.
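The same termination can be done programmatically. As a small sketch (assuming a Linux system with the standard sleep binary available), Python's os.kill sends the same SIGTERM that the kill command sends by default:

```python
import os
import signal
import subprocess

# Spawn a long-running child process, then terminate it the way
# `kill <pid>` would: by sending SIGTERM.
child = subprocess.Popen(["sleep", "60"])

os.kill(child.pid, signal.SIGTERM)  # equivalent to: kill <pid>
child.wait()

# A negative return code means the child was terminated by that signal number.
assert child.returncode == -signal.SIGTERM
```

Here the sleep process stands in for the nginx worker from the example; any PID you are allowed to signal would work the same way.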

Using signals with kill

A somewhat common issue (though if it happens to you a lot, that may be a sign that something is wrong) is when you run kill <pid> on a process and the process does not terminate. This can happen for many reasons, but what can you do in those scenarios? Well, a common response is to use the kill command with the -9 flag.

Example:

root@blog:/# ps -C nginx
  PID TTY          TIME CMD
  566 ?        00:00:00 nginx
  567 ?        00:00:00 nginx
  568 ?        00:00:09 nginx
  570 ?        00:00:09 nginx
 8347 ?        00:00:00 nginx
root@blog:/# kill -9 570
root@blog:/# ps -C nginx
  PID TTY          TIME CMD
  566 ?        00:00:00 nginx
  567 ?        00:00:00 nginx
  568 ?        00:00:09 nginx
 8347 ?        00:00:00 nginx
 8564 ?        00:00:00 nginx

So why does -9 work? When the kill command is run, it is actually sending a signal to the process. By default, the kill command sends a SIGTERM signal to the specified process.

The SIGTERM signal tells the process that it should perform its shutdown procedures and terminate cleanly by closing all log files, connections, etc. The below example is an excerpt from a Python application; this snippet of code enables the application to capture the SIGTERM signal and perform the actions in the killhandle function.

Signal handling in Python:

import signal
import sys
import syslog

def killhandle(signum, frame):
    ''' This will close connections cleanly '''
    line = "SIGTERM detected, shutting down"
    syslog.syslog(syslog.LOG_INFO, line)
    rdb_server.close()  # database connection, defined elsewhere in the application
    syslog.closelog()
    sys.exit(0)

signal.signal(signal.SIGTERM, killhandle)

In the above code example the process is able to close both its database connection and its connection to rsyslog cleanly before exiting. In general it is a good idea for applications to close open file handles and external connections during shutdown; however, sometimes these steps can take a long time or, due to other issues, not happen at all, leaving the process in a state where it is not running correctly but is also not terminated.

When a process is in this limbo state, it is reasonable to send it the SIGKILL signal, which can be invoked by running the kill command with the -9 flag. Unlike SIGTERM, the SIGKILL signal cannot be captured by the process and thus cannot be ignored. The SIGKILL signal is handled outside of the process completely and is used to stop the process immediately. The problem with SIGKILL is that it does not allow an application to close its open files or database connections cleanly, which over time could cause other issues; therefore it is generally better to reserve SIGKILL as a last resort.
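To see the difference between the two signals in action, here is a small, self-contained sketch; the throwaway child process is purely illustrative:

```python
import signal
import subprocess
import time

# Start a child that explicitly ignores SIGTERM, then sleeps.
child = subprocess.Popen([
    "python3", "-c",
    "import signal, time; signal.signal(signal.SIGTERM, signal.SIG_IGN); time.sleep(60)",
])

time.sleep(0.5)                      # give the child time to install its handler
child.send_signal(signal.SIGTERM)    # ignored: the child keeps running
time.sleep(0.5)
print(child.poll())                  # None -> still alive after SIGTERM

child.send_signal(signal.SIGKILL)    # cannot be caught or ignored
child.wait()
print(child.returncode)              # -9 -> killed by SIGKILL
```

The first poll() shows the child survived SIGTERM because it set the signal's disposition to SIG_IGN; nothing it does can survive SIGKILL.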

Signal Numbers and Dispositions

Each signal has a numeric value and an action associated with it. The numeric values can be used with commands such as kill to define which signal is sent to the process, and the "action" or "disposition" defines what that signal invokes when it is delivered.
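You can inspect the name-to-number mapping on your own system with kill -l (the exact output format varies between shells):

```shell
# List every signal name known to the shell's kill builtin.
kill -l

# Most shells also translate a number to its name; 15 prints TERM.
kill -l 15
```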

Signal Actions

While there are several actions for the various signals on a Linux system, I want to highlight the two below, as they are the most relevant from a process-termination perspective.

  • Term - This action is used to signal that the process should terminate
  • Core - This action is used to signal that the process should core dump and then terminate
Common Signals

Below is a list of a few common signals: the numeric value of each signal, the action associated with it, and how to send that signal to a process. This list, while not complete, should cover general usage of the kill command.

SIGHUP - 1 - Term

  • The SIGHUP signal is commonly used to tell a process to shut down and restart; this signal can be caught and ignored by a process.

Syntax:

# kill -1 <pid>
# kill -HUP <pid>
# kill -SIGHUP <pid>

SIGINT - 2 - Term

  • The SIGINT signal is the interrupt signal, commonly sent when a user presses ctrl+c on the keyboard.

Syntax:

# kill -2 <pid>
# kill -INT <pid>
# kill -SIGINT <pid>

SIGQUIT - 3 - Core

  • The SIGQUIT signal is useful for stopping a process and telling it to create a core dump file. The core file can be useful for debugging applications, but keep in mind that your system needs to be set up to allow the creation of core files.

Syntax:

# kill -3 <pid>
# kill -QUIT <pid>
# kill -SIGQUIT <pid>
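Note that core files are often disabled by default via resource limits; you can check and raise the limit with the shell's ulimit builtin:

```shell
# Show the current core file size limit; 0 means no core files are written.
ulimit -c

# Allow core files of unlimited size for this shell session and its children.
ulimit -c unlimited
ulimit -c
```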

SIGKILL - 9 - Term

  • The SIGKILL signal cannot be ignored by a process, and its delivery is handled outside of the process itself. This signal is useful when an application has stopped responding or will not terminate after being given the SIGTERM signal. It will stop most processes; however, there are exceptions, such as zombie processes.

Syntax:

# kill -9 <pid>
# kill -KILL <pid>
# kill -SIGKILL <pid>

SIGSEGV - 11 - Core

  • The SIGSEGV signal is generally sent to a process by the kernel when the process is misbehaving; it is used when there is an "invalid memory reference", and you may commonly see a message such as "segmentation fault" in log files or via strace. You can technically send this signal with kill as well; however, it is mainly useful for creating core dump files, which can also be done with the SIGQUIT signal.

Syntax:

# kill -11 <pid>
# kill -SEGV <pid>
# kill -SIGSEGV <pid>

SIGTERM - 15 - Term

  • The SIGTERM signal is the default signal sent when invoking the kill command. It tells the process to shut down and is generally accepted as the signal to use when shutting down cleanly. Technically this signal can be ignored; however, that is considered bad practice and is generally avoided.

Syntax:

# kill <pid>
# kill -15 <pid>
# kill -TERM <pid>
# kill -SIGTERM <pid>

It is a good idea for any sysadmin to get familiar with how signals work (man 7 signal) and what each signal really means, but if you are looking for the TL;DR version: don't run kill -9 unless you really have to. If the process isn't stopping right away, give it a bit more time, or try to find out whether the process is waiting on a child process to finish before reaching for kill -9.
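That escalation advice can be sketched as a small helper: ask politely with SIGTERM, poll for a while, and only reach for SIGKILL if the process is still around. A sketch, not production code; the function name is my own:

```python
import os
import signal
import time

def terminate_gracefully(pid, timeout=5.0):
    """Send SIGTERM, wait up to `timeout` seconds, then SIGKILL as a last resort."""
    try:
        os.kill(pid, signal.SIGTERM)      # polite request to shut down
    except ProcessLookupError:
        return                            # already gone
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            os.kill(pid, 0)               # signal 0 only checks existence
        except ProcessLookupError:
            return                        # process exited on its own
        time.sleep(0.2)
    try:
        os.kill(pid, signal.SIGKILL)      # could not be persuaded; force it
    except ProcessLookupError:
        pass
```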


Originally Posted on BenCane.com
Categories: FLOSS Project Planets

Joey Hess: adding docker support to propellor

Planet Debian - Tue, 2014-04-01 04:22

Propellor development is churning away! (And leaving no few puns in its wake..)

Now it supports secure handling of private data like passwords (only the host that owns it can see it), and fully end-to-end secured deployment via gpg signed and verified commits.

And, I've just gotten the Docker support to build. It probably doesn't quite work yet, but it should be only a few bugs away at this point.

Here's how to deploy a dockerized webserver with propellor:

host hostname@"clam.kitenet.net" = Just
    [ Docker.configured
    , File.dirExists "/var/www"
    , Docker.hasContainer hostname "webserver" container
    ]

container _ "webserver" = Just $ Docker.containerFromImage "joeyh/debian-unstable"
    [ Docker.publish "80:80"
    , Docker.volume "/var/www:/var/www"
    , Docker.inside
        [ serviceRunning "apache2"
            `requires` Apt.installed ["apache2"]
        ]
    ]

Docker containers are set up using Properties too, just like regular hosts, but their Properties are run inside the container.

That means that, if I change the web server port above, Propellor will notice the container config is out of date, and stop the container, commit an image based on it, and quickly use that to bring up a new container with the new configuration.

If I change the web server to say, lighttpd, Propellor will run inside the container, and notice that it needs to install lighttpd to satisfy the new property, and so will update the container without needing to take it down.

Adding all this behavior took only 253 lines of code, and none of it impacts the core of Propellor at all; it's all in Propellor.Property.Docker. (Well, I did need another hundred lines to write a daemon that runs inside the container and reads commands to run over a named pipe... Docker makes running ad-hoc commands inside a container a PITA.)

So, I think that this vindicates the approach of making Propellor's configuration a list of Properties, which can be constructed by arbitrarily interesting Haskell code. I didn't design Propellor to support containers, but it was easy to find a way to express them as shown above.

Compare that with how Puppet supports Docker: http://docs.docker.io/en/latest/use/puppet/

docker::run { 'helloworld':
    image   => 'ubuntu',
    command => '/bin/sh -c "while true; do echo hello world; sleep 1; done"',
    ports   => ['4444', '4555'],
    ...

All Puppet manages is running the image and a simple static command inside it. All the complexity that Puppet provides for configuring servers cannot easily be brought to bear inside the container, and a large reason for that, I think, is that its configuration file is just not expressive enough.

Categories: FLOSS Project Planets

Open Source I love you!

Planet KDE - Tue, 2014-04-01 04:18

In which we apologize for "monday" being a fluid term, promise that it is coming, and essentially talk about Open Source instead - because we should.

Caspar David Friedrich, Wanderer above the sea of fog, 1818 
First of all - "Monday" - I know it's been a tad fickle when it comes to those reports. This has everything to do with the massive amount of work being done and the work posted in the Public Forums. Going through it is suddenly a solid day's work - which is brilliant! It's also something that suddenly springs on you - much like the sudden burst of great weather we're having here in Sweden. So tomorrow a "monday" report is coming!

...
Until then I want to take this short moment to talk about the magic of Open Source. I recently spent some time trying to fish up local funds from City officials and politicians, and found out the hard way how difficult it is to describe what Open Source is.
Not just technically but communally - the idea behind the method that is Open Source, and why I left everything proprietary behind and switched to it when the design and illustration community is so thoroughly entrenched in proprietary software.

...
Ok so I spent the day cold-calling (i.e. calling without a prior contact) local politicians. The discussion went sort of like this:

Politician: So you guys want venture capital to invest in your idea?
Me: No, see since this is something that everyone will use we need the common, the state, the EU etc to help fund it.
Politician: So you're offering this to us for free?
Me: No, we need help funding it. We can't do a years work without paying people.
Politician: So the company you work for want to sell it to the City?
Me: No, we want to give it away when it's done so everyone can use it for free.
Politician: So you're giving it to us for free?
Me: No we still need funding.
Politician: So you want venture capital?

Here I had to explain, again, why you can't sell Open Source software the same way you can sell proprietary software. Then I had to explain why something that would benefit everyone isn't exactly economically viable for a company or venture capitalist to invest in.
After that I had to explain WHY I did something that I couldn't make any money off. Which got me thinking... Isn't this awesome? Isn't it great that my motivation clearly isn't about making money (aside from paying my bills) but about doing something that I know will benefit a rather excluded group in Europe (if not across the globe)? Something that is common - a thing for all?

But how could I explain it without going off on a Marxist tangent? (Now, granted, I am one, but it doesn't do to use that as an explanation for your actions.)

The end result was: because this is the future.

...
Yes we don't have as much cash as proprietary software. Sure, we don't have the business contacts or perhaps even the market savvy.

But the difference is, they are an oared ship trying to catch the wind - while we have full sails up, flowing with it. They can hire hundreds of dedicated developers - we are already thousands. They try to figure out how to create something you can sell - we look at what's needed and do that.

We're not salesmen, we're creators. We're explorers, pioneers, inventors and artists.

At this point one of the politicians asked: "So do you work for Microsoft or Apple?"

...

Tomorrow the "Monday" report will be done and posted! There will be a run through of all the things we're getting done and it will be awesome. Stay tuned!


Categories: FLOSS Project Planets

Hideki Yamane: dot paint characters in motd

Planet Debian - Tue, 2014-04-01 04:14
Do you enjoy April fool jokes? :-)

Best of today's one for me is... "dot paint characters in motd"
I logged in to a remote server and saw it with a bit of surprise.

You can show it by copying and pasting each gist into /etc/motd.tail.

Categories: FLOSS Project Planets

Junichi Uekawa: Reading up on encrypted file systems.

Planet Debian - Tue, 2014-04-01 04:00
Reading up on encrypted file systems. I've only been using cryptoloop, but ecryptfs seems to be a different approach to the problem, mounting an encrypted directory as opposed to encrypting a block device of data.

Categories: FLOSS Project Planets