FLOSS Project Planets
KDE has recently released the newest Release Candidate of the Applications 15.08 release. Among the new features and changes of this release are a technology preview of the new KF5-based KDE PIM suite (including reworked, faster Akonadi internals) and new applications ported to KF5 (the most notable being Dolphin and Ark). After some thought on how to let users test this release without affecting their setups too much, the openSUSE community KDE team is happy to bring this latest RC to openSUSE Tumbleweed and openSUSE 13.2[1].
To install this new release, add the KDE:Applications repository using either YaST or zypper. The PIM suite deserves a special mention: as upstream KDE labels it a technology preview, we decided to allow installation only by explicit user choice. To do so, install the kmail5 package and the akonadi-server package (other components of the PIM suite are also there, with a 5 suffix): this operation will uninstall the 4.14 packages (but will not remove any local data) and install the new versions of the mail and PIM applications. To go back, install akonadi-runtime and the respective packages without the 5 suffix (e.g., kmail, korganizer).
[1] Not all packages are available on openSUSE 13.2 due to the version of KF5 and extra-cmake-modules shipped there.
A common implementation involves calling a set of functions sequentially, with the result of each call being passed to the subsequent call.
from math import sqrt, ceil

def transform(value):
    x = float(value)
    x = int(ceil(x))
    x = pow(x, 2)
    x = sqrt(x)
    return x
This is less than ideal because it's verbose and the explicit variable assignments seem unnecessary. However, the inline representation may be a little tough to read, especially with longer names or different fixed arguments.
from math import sqrt, ceil

def transform(value):
    return sqrt(pow(int(ceil(float(value))), 2))
The other limitation is that the sequence of commands is hard-coded: I have to create a function for each variant I might need. I may, however, need the ability to compose the sequence dynamically.
One alternative is to use a functional idiom to compose all the functions together into a new function. This new function represents the pipeline that the previous set of functions ran the value through. The benefit is that we extract the functions into their own data structure (in this case a tuple), where each element represents a step in the pipeline. You can also build up the sequence dynamically should the need arise.
Here we use foldl (a.k.a. reduce) and some lambdas to create the pipeline from the sequence of functions.
from functools import reduce
from math import sqrt, ceil

fn_sequence = (float, ceil, int, lambda x: pow(x, 2), sqrt)
transform = reduce(lambda a, b: lambda x: b(a(x)), fn_sequence)
transform('2.1')  # => 3.0
Now I have a convenience function that represents the pipeline of functions. We can extrapolate this type of pipeline solution for more complex and/or more dynamic pipelines, limited only by the sequence of commands. The unfortunate cost of this idiom is the additional n-1 function calls created by the reduce when composing the sequence of functions together. Given this cost, and the cost of function calls in Python generally, it would probably be better to use this in cases where there will be additional reuse of intermediate or final forms of the composition.
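The composition step can be factored into a small reusable helper; here is a minimal sketch (the names compose and to_int are my own, not from any library), showing how intermediate compositions can be named and reused:

```python
from functools import reduce
from math import sqrt, ceil

def compose(*fns):
    """Compose left to right: compose(f, g)(x) == g(f(x))."""
    return reduce(lambda a, b: lambda x: b(a(x)), fns)

# The full pipeline, plus a shorter variant built from the same pieces.
transform = compose(float, ceil, int, lambda x: pow(x, 2), sqrt)
to_int = compose(float, ceil, int)

print(transform('2.1'))  # 3.0
print(to_int('2.1'))     # 3
```

Naming intermediate compositions like to_int is one way to get the reuse that offsets the extra call overhead.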
On behalf of the Nikola team, I am pleased to announce the immediate availability of Nikola v7.6.3. It fixes some bugs and adds new features.

What is Nikola?
Nikola is a static site and blog generator, written in Python. It can use Mako and Jinja2 templates, and input in many popular markup formats, such as reStructuredText and Markdown — and can even turn Jupyter (IPython) Notebooks into blog posts! It also supports image galleries, and is multilingual. Nikola is flexible, and page builds are extremely fast, courtesy of doit (which rebuilds only what has changed).
Find out more at the website: https://getnikola.com/

Downloads
- New translations: Serbian and Bosnian, by saleone
- Added mechanism for rest extensions to depend on configuration options (Issue #1919)
- Render Jupyter notebooks (ipynb) in listings (Issue #1900)
- Handle folders without trailing slashes in nikola auto (Issue #1933)
- Set a base element to aid relative URL resolution, stripped on-the-fly when using the auto or serve command to view site locally. (Issue #1922)
- Rebuild archives when post slugs and titles change (Issue #1931)
- Handle special characters in URLs in nikola auto (Issue #1925)
- Avoid Broken Pipe error in nikola auto (Issue #1906)
- Fixed "nikola auto" serving implicit index.html with the wrong MIME type (Issue #1921)
- Handle non-integer shutter speeds and other variables in WordPress importer (Issue #1917)
Earlier this week I had a chance to talk with Anthony Scopatz and Katy Huff about their new book, Effective Computation in Physics.
JC: Thanks for giving me a copy of the book when we were at SciPy 2015. It’s a nice book. It’s about a lot more than computational physics.
KH: Right. If you think of it as physical science in general, that’s the group we’re trying to target.
JC: Targeting physical science more than life science?
KH: Yes. You can see that more in the data structures we cover which are very float-based rather than things like strings and locations.
AS: To second that, I’d say that all the examples are coming from the physical sciences. The deep examples, like in the parallelism chapter, are most relevant to physicists.
JC: Even in life sciences, there’s a lot more than sequences of base pairs.
KH: Right. A lot of people have asked what chapters they should skip. It’s probable that ecologists or social scientists are not going to be interested in the chapter about HDF5. But the rest of the book, more or less, could be useful to them.
JC: I was impressed that there’s a lot of scattered stuff that you need to know that you’ve brought into one place. This would be a great book to hand a beginning grad student.
KH: That was a big motivation for writing the book. Anthony’s a professor now and I’m applying to be a professor and I can’t spend all my time ramping students up to be useful researchers. I’d rather say “Here’s a book. It’s yours. Come to me if it’s not in the index.”
JC: And they’d rather have a book that they could access any time than have to come to you. Are you thinking of doing a second edition as things change over time?
AS: It’s on the table to do a second edition eventually. Katy and I will have the opportunity if the book is profitable and the material becomes out of date. O’Reilly could ask someone else to write a second edition, but they would ask us first.
JC: Presumably putting out a second edition would not be as much work as creating the first one.
KH: I sure hope not!
AS: There’s a lot of stuff that’s not in this book. Greg Wilson jokingly asked us when Volume 2 would come out. There may be a need for a more intermediate book that extends the topics.
KH: And maybe targets languages other than Python where you’re going to have to deal with configuring and building, installing and linking libraries, that kind of stuff. I’d like to cover more of that, but Python doesn’t have that problem!
JC: You may sell a lot of books when the school year starts.
KH: Anthony and I both have plans for courses based around this book. Hopefully students will find it helpful.
JC: Maybe someone else is planning the same thing. It would be nice if they told you.
AS: A couple people have approached us about doing exactly that. Something I’d like to see is for people teaching courses around it to pull their curriculum together.
JC: Is there a web site for the book, other than an errata page at the publisher?
KH: Sure, there’s physics.codes. Anthony put that up.
JC: When did y’all start writing the book?
AS: It was April or May last year when we finally started writing. There was a proposal cycle six or seven months before that. Katy and I were simultaneously talking to O’Reilly, so that worked out well.
KH: In a sense, the book process started for me in graduate school with The Hacker Within and Software Carpentry. A lot of the flows in the book come from the outlines of Hacker Within tutorials and Software Carpentry tutorials years ago.
AS: On that note, what happened for me, I took those tutorials and turned them into a masters course for AIMS, African Institute for Mathematical Sciences. At the end I thought it would be nice if this were a book. It didn’t occur to me that there was a book’s worth of material until the end of the course at AIMS. I owe a great debt to AIMS in that way.
JC: Is there something else you’d like to say about the book that we haven’t talked about?
KH: I think it would be a fun exercise for someone to try to determine which half of the chapters I wrote and which Anthony wrote. Maybe using some sort of clustering algorithm or pun detection. If anyone wants to do that sort of analysis, I’d love to see if you guess right. Open competition. Free beer from Katy if you can figure out which half. We split the work in half, but it’s really mixed around. People who know us well will probably know that Anthony’s chapters have a high density of puns.
AS: I think the main point that I would like to see come across is that the book is useful to a broader audience outside the physical sciences. Even for people who are not scientists themselves, it’s useful to describe the mindset of physical scientists to software developers or managers. That communication protocol kinda goes both ways, though I didn’t expect that when we started out.
JC: I appreciate that it’s one book. Obviously it won’t cover everything you need to know. But it’s not like here’s a book on Linux, here’s a book on git, here are several books on Python. And some of the material in here isn’t in any book.
KH: Like licensing. Anthony had the idea to add the chapter on licensing. We get asked all the time “Which license do you use? And why?” It’s confusing, and you can get it really wrong.
* * *
Check out Effective Computation in Physics. It’s about more than physics. It’s a lot of what you need to know to get started with scientific computing in Python, all in one place.
Finally we have the logic for the Scheduler implemented. The basic workflow behind the main scheduling algorithm is based on the priority queue concept. When triggered, the program evaluates the existing objects, or “jobs”, and gives every single one of them a score. This scoring mechanism is based on a number of constraints and settings such as altitude, angular distance to the Moon, and starting/finishing time. Once the best object has been selected, the scheduler proceeds to execute the current job. Execution is based on program states (a clean, asynchronous way of handling the code), so code execution is never blocked.
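The score-and-select step described above can be sketched roughly as follows. This is a simplified illustration only; the constraint names and scores are hypothetical and are not the scheduler's actual code:

```python
import heapq

def score(job):
    # Hypothetical per-constraint scores; higher is better.
    return (job["altitude_score"]
            + job["moon_separation_score"]
            + job["timing_score"])

def pick_best(jobs):
    # heapq is a min-heap, so push negated scores to pop the best job first.
    heap = [(-score(job), i, job) for i, job in enumerate(jobs)]
    heapq.heapify(heap)
    _, _, best = heapq.heappop(heap)
    return best

jobs = [
    {"name": "M31", "altitude_score": 8, "moon_separation_score": 5, "timing_score": 3},
    {"name": "M42", "altitude_score": 6, "moon_separation_score": 9, "timing_score": 4},
]
print(pick_best(jobs)["name"])  # M42 (score 19 beats 16)
```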
A new feature that was added is the FITS selection method. Instead of regular object selection, a user can now select a FITS image. The image is then solved into a set of coordinates (RA/DEC), and the regular object is constructed from them.
My work from now on will consist of maintaining and cleaning up the code. I am certain that my experience with KDE will not end with GSoC. Big thanks to Jasem for his assistance and guidance.
Stay tuned for the next update that will include a preview of the GUI which suffered small modifications during this last month :).
I’m getting close to releasing version 3.3.11 of procps. When it gets near that time, I generally browse the Debian Bug Tracker again for procps bugs. Bug number #733758 caught my eye: with the free command, if you used the s option before the c option, the s option failed with “seconds argument ‘N’ failed”, where N was the number you typed in. That error should only appear when you type letters instead of a number of seconds. It seemed reasonably simple to test and simple to fix.

Take me to the code
The relevant code looks like this:

case 's':
    flags |= FREE_REPEAT;
    args.repeat_interval = (1000000 * strtof(optarg, &endptr));
    if (errno || optarg == endptr || (endptr && *endptr))
        xerrx(EXIT_FAILURE,
              _("seconds argument `%s' failed"), optarg);
Seems a pretty stock-standard sort of function: use strtof() to convert the string into a float.
You need to check both errno AND optarg == endptr because:
- A valid but large float means errno = ERANGE
- An invalid float (e.g. “FOO”) means optarg == endptr
At first I thought the logic was wrong, but tracing through it, it was fine. I then compiled free using the upstream git source; the program worked fine with the s flag and no c flag. Doing a diff between the upstream HEAD and Debian’s 3.3.10 source showed nothing obvious.
I then shifted the upstream git to 3.3.10 too and re-compiled. The Debian source failed; the upstream parsed the s flag fine. I ran diff: no change. I ran md5sum: the hashes matched. What is going on here?

I’ll set when I want
The man page says that in the case of under/overflow, “ERANGE is stored in errno”. What this means is that if there isn’t an under/overflow, errno is NOT set to 0; it is simply not set at all. This is quite useful when you have a chain of functions and you just want to know that something failed, but don’t care what.
Most of the time, you generally have a “Have I failed?” test and then check errno for why. A typical example is socket calls, where anything less than 0 means failure: you check the return value first and then errno. strtof() is one of those funny ones where most people check errno directly; it’s simpler than checking for +/- HUGE_VAL. You can see, though, that there are traps.

What’s the difference?
OK, so a simple errno = 0 above the call fixes it, but why would the Debian source tree have this failure and the upstream not, even with the same code? The difference is how they are compiled.
The upstream compiles free like this:

gcc -std=gnu99 -DHAVE_CONFIG_H -I. -include ./config.h -I./include -DLOCALEDIR=\"/usr/local/share/locale\" -Iproc -g -O2 -MT free.o -MD -MP -MF .deps/free.Tpo -c -o free.o free.c
mv -f .deps/free.Tpo .deps/free.Po
/bin/bash ./libtool --tag=CC --mode=link gcc -std=gnu99 -Iproc -g -O2 ./proc/libprocps.la -o free free.o strutils.o fileutils.o -ldl
libtool: link: gcc -std=gnu99 -Iproc -g -O2 -o .libs/free free.o strutils.o fileutils.o ./proc/.libs/libprocps.so -ldl
While Debian has some hardening flags:

gcc -std=gnu99 -DHAVE_CONFIG_H -I. -include ./config.h -I./include -DLOCALEDIR=\"/usr/share/locale\" -D_FORTIFY_SOURCE=2 -Iproc -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -MT free.o -MD -MP -MF .deps/free.Tpo -c -o free.o free.c
mv -f .deps/free.Tpo .deps/free.Po
/bin/bash ./libtool --tag=CC --mode=link gcc -std=gnu99 -Iproc -g -O2 -fstack-protector-strong -Wformat -Werror=format-security ./proc/libprocps.la -Wl,-z,relro -o free free.o strutils.o fileutils.o -ldl
libtool: link: gcc -std=gnu99 -Iproc -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z -Wl,relro -o .libs/free free.o strutils.o fileutils.o ./proc/.libs/libprocps.so -ldl
It’s not the compiling of free itself that is doing it, but the library. Most likely something called before the strtof() sets errno, and this code then falls into the trap. In fact, if you run the upstream free linked against the Debian procps library, it fails.
The moral of the story is to set errno to 0 before the function is called if you are going to depend on it to check whether the function succeeded.
I am pleased to say that I am now sponsored by Blue Systems GmbH to work full-time on Plasma Mobile and Plasma Mobile applications for the next 4 months.
During this period I will mainly be working on Plasma Mobile. I will develop new applications for Plasma Mobile as well as port existing ones to work on Plasma Mobile.
It is an awesome opportunity for me; I will start work on this on 15th August.
So in short,
The tape archiver, better known as tar, is one of the older backup programs in existence.
It's not very good at automated incremental backups (for which bacula is a good choice), but it can be useful for "let's take a quick snapshot of the current system" type of situations.
As I'm preparing to head off to debconf tomorrow, I'm taking a backup of my n-1 laptop (which still contains some data that I don't want to lose) so it can be reinstalled and used by the Debconf video team. While I could use a "proper" backup system, running tar to a large hard disk is much easier.
By default, however, tar won't preserve everything, so it is usually a good idea to add some extra options. This is what I'm currently running:

sudo tar cvpaSf player.local:carillon.tgz --rmt-command=/usr/sbin/rmt --one-file-system /
which breaks down to:
- c: create a tar archive
- v: verbose output
- p: preserve permissions
- a: automatically determine compression based on the file extension
- S: handle sparse files efficiently
- f player.local:carillon.tgz: write to a file on a remote host, using /usr/sbin/rmt as the rmt program
- --one-file-system: don't descend into a separate filesystem (since I don't want /proc and /sys etc. to be backed up)
- /: back up my root partition
Since I don't believe there's any value in separate filesystems on a laptop, this will back up the entire contents of my n-1 laptop to carillon.tgz in my home directory on player.local.
For announcements of most new GNU releases, subscribe to the info-gnu mailing list: https://lists.gnu.org/mailman/listinfo/info-gnu.
To download: nearly all GNU software is available from https://ftp.gnu.org/gnu/, or preferably one of its mirrors from https://www.gnu.org/prep/ftp.html. You can use the url https://ftpmirror.gnu.org/ to be automatically redirected to a (hopefully) nearby and up-to-date mirror.
This month, we welcome Assaf Gordon as a new comaintainer of GNU Coreutils.
A number of GNU packages, as well as the GNU operating system as a whole, are looking for maintainers and other assistance: please see https://www.gnu.org/server/takeaction.html#unmaint if you'd like to help. The general page on how to help GNU is at https://www.gnu.org/help/help.html.
If you have a working or partly working program that you'd like to offer to the GNU project as a GNU package, see https://www.gnu.org/help/evaluation.html.
As always, please feel free to write to us at firstname.lastname@example.org with any GNUish questions or suggestions for future installments.
This guest post was submitted by Ole Tange, maintainer of GNU Parallel.
I am the maintainer of a piece of free software called GNU Parallel. Free software guarantees you access to the source code, but I have been wondering how many people actually read it.
To test this, I put in a comment telling people to email me when they read it. The comment was put in a section of the code that no one would look at to fix or improve the software -- the source-code equivalent of a dusty corner. To make sure the comment would not show up if someone just grepped through the source code, I rot13'ed it.
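For illustration of the technique (the wording here is my own, not the actual hidden comment), rot13 is a one-liner in Python via the codecs module:

```python
import codecs

# rot13 shifts each letter 13 places; applying it twice round-trips.
hidden = codecs.encode("Email me when you read this", "rot13")
print(hidden)                          # Rznvy zr jura lbh ernq guvf
print(codecs.decode(hidden, "rot13"))  # Email me when you read this
```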
Two-and-a-half months later I received an email from someone who not only managed to find the comment, but also managed to guess the code had to be rot13'ed.
The first cookie was released on January 24, 2011, and was won by Ævar Arnfjörð Bjarmason on April 10, 2011.
I inserted a new cookie on August 18, 2013, that was a bit harder as you would have to use rot14. On July 19, 2015 Mark Maimone won that cookie.
This brings me to the conclusion that there are people who are not affiliated with the project who will read the source code -- though it may not happen very often.
Join the Free Software Foundation and friends in Boston, MA, USA on the evening of Saturday, October 3rd for our 30th Birthday Party. We'll share hors d'oeuvres, drinks, and an address by FSF founder and president Richard Stallman, as well as plenty of social time for catching up with old friends and making new ones.
If the free software movement is coming together for a party, we might as well get some work done, too. We're planning a mini-conference for the day of October 3rd, before the party, where we'll share what we've learned from the first thirty years of the free software movement and swap ideas about the future. Stay tuned for more details about this, as well as a possible dinner on Friday night.
Bookmark the event homepage for lodging suggestions and more information about the mini-conference and other festivities that weekend, coming soon.

Not coming to Boston?
We've been flattered by supporters around the world asking to hold their own local events for the FSF's birthday. Of course! We'd even love to write about it, or come up with a creative way of connecting it to the event in Boston. Contact us at email@example.com if you're interested.
We also intend to stream the event and post videos online afterwards.

Support our work for computer user freedom
Our supporters have made our thirty wonderful years possible. By becoming an associate member you'll help us achieve even more in the next thirty. Members also get special benefits, including gratis admission to our LibrePlanet conference each spring.
If you are interested in helping out at the mini-conference or the party, we welcome you! In addition to setting up the venue and greeting guests, we need people with skills in free software livestreaming. All volunteers will receive a special reverse birthday gift from us to you.
The FSF is also seeking general event, beer, or food sponsors. To sponsor or recommend a sponsor, or to volunteer, reply to this email.
Also, we'd like to introduce Georgia Young, our newest FSF staffer, in the role of program manager. Georgia is planning the thirtieth birthday events, so expect to hear more from her soon.
See you in October!
Read the New Yorker Article, The GNU Manifesto Turns Thirty by Maria Bustillos.
Join the FSF and friends this Friday, July 24 — at a new time — from 12pm to 3pm EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory by adding new entries and updating existing ones. We will be on IRC in the #fsf channel on freenode. There are also weekly FSD Meetings pages that everyone is welcome to contribute to before, during, and after each meeting.
Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic categories and descriptions to detailed info about version control, IRC channels, documentation, and licensing, all carefully checked by FSF staff and trained volunteers.
While the Free Software Directory has been and continues to be a great resource to the world over the past decade, it has the potential to be a resource of even greater value. But it needs your help!
If you are eager to help and you can't wait or are simply unable to join on IRC on Friday, our participation guide will provide you with all the information you need to get started helping the Directory today!
Still can't figure out what time the meeting starts? Open up a terminal and enter the following to get the start time in your time zone:
date --date='TZ="America/New_York" 12:00 this Fri'
The Seattle GNU/Linux Conference -- we like to call it SeaGL -- is the Emerald City’s best grassroots technical conference for free and libre software. The 3rd annual conference happens Friday, October 23 and Saturday, October 24 at Seattle Central College, and it’s already shaping up to be better than last year!
First, we’re thrilled to announce the keynote addresses will be delivered by the FSF’s own Richard M. Stallman, and Shauna Gordon-McKeon, the main organizer of OpenHatch’s campus events. This year we’re honoring the origins of free software while recognizing the importance of growing the movement through recruiting new activists, users, and enthusiasts.
We are also accepting nominations for the first annual Cascadia Community Builder award recognizing a person who has significantly contributed to the free software movement in Washington, Oregon, British Columbia, or Idaho. Please take a minute to nominate someone who is doing great community work in the area!
We’re also looking for speakers. Our Call for Participation is open until July 26. SeaGL welcomes a diverse range of topics. It doesn’t matter if this is your first conference presentation or your fifteenth; if you’re excited about a topic related to GNU/Linux or free software, then we want to hear about it. We’ll be helping folks edit and flesh out their proposals in our IRC channel, #seagl, a few times over the next couple of weeks.
Finally, for groups or businesses interested in sponsoring the event, the Exhibitor & Sponsor Prospectus is now posted. We will have a small hall with tables available for sponsors and exhibitors, with free tables available for local nonprofit or educational organizations.
Here are the details:
Friday, October 23 to Saturday, October 24, 2015
Seattle Central College
Cost: Free (as in beer). No registration necessary.
We have extended the application deadline for fall internships to August 7th.
Do you want to help people learn why free software matters, and how to use it? Do you want to dig deep into software freedom issues like copyleft, Digital Restrictions Management, or surveillance and encryption?
These positions are unpaid, but the FSF will provide any appropriate documentation you might need to receive funding and school credit from outside sources. We place an emphasis on providing hands-on educational opportunities for interns in which they work closely with staff mentors on semester-long projects that match their skills and interest.
Our current campaigns intern is focusing on computer user privacy and encryption, expanding our Email Self-Defense project. Past licensing interns have worked to improve the Free Software Directory and analyzed the compatibility of other licenses with the GPL. And a past sysadmin intern did extensive work to set up our StatusNet instance.
Fall internships begin on or about August 31st and run through December 4th. We prefer candidates who are able to work in our Boston office, but may consider remote interns. The deadline to apply for a fall internship at the Free Software Foundation is August 7th.
To apply, send a letter of interest and a resume with two references to firstname.lastname@example.org. Please send all application materials in free software-friendly formats like .pdf, .odt, and .txt. Use "Fall internship application" as the subject line of your email. Please include links to your writing, design, or coding work if it applies -- personal, professional, or class work is acceptable. URLs are preferred, though email attachments in free formats are acceptable, too. Learn more about our internships, and direct any questions to email@example.com.