FLOSS Project Planets

Subsurface 4.2 Beta1 (4.1.90)

Planet KDE - Wed, 2014-07-16 13:46

 

It’s been a little over two months since our last release — so there’s a chance that we will meet our plans of a release “about every three months”.

This is the first beta, so be careful: keep backups of your data files and watch out for changes you didn’t intend. There are still a few bugs hiding; please check for them (and report any you find) at our bugtracker.

Some of the interesting new features in 4.2:

  •  the dive planner has been added back, now using the graphical profile editor
  •  pictures can be associated with dives and shown in the profile
  •  printing has been improved quite a bit
  •  data entry for dives is much more intuitive and consistent
  •  first steps towards an HTML exporter
  •  support for importing dive log files from Seabear dive computers
  •  a user survey was added to learn more about the needs of our users

Please let us know what you think!


Categories: FLOSS Project Planets

Freelock : Performance problem: N! database calls

Planet Drupal - Wed, 2014-07-16 12:34

Kicking off some posts about various performance challenges we've fixed.

N Factorial

During a code review for a site we were taking over, I found this little gem:

<?php

function charity_view_views_pre_render($view) {
  // this code takes the rows returned from a view query after the query has been run, and formats it for display...
  // snip to the code of interest:
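  // NOTE: if charity_view_sort_popular() runs a database query for each pair of
  // rows it compares (which is what the post title implies), then sorting the
  // result set here triggers a separate query per comparison instead of one in total.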
  usort($view->result, 'charity_view_sort_popular');
}

Tags: Performance, Technical, Drupal, Drupal Planet
Categories: FLOSS Project Planets

Plasma 5 in the News

Planet KDE - Wed, 2014-07-16 11:14

Plasma 5 was released yesterday and the internet is ablaze with praise and delight at the results.

Slashdot just posted their story KDE Releases Plasma 5 and the first comment is “Thank our KDE developers for their hard work. I’m really impressed by KDE and have used it a lot over the years.” You’re welcome, Slashdot reader.

They point to themukt’s Details Review of Plasma 5, which rightly says “Don’t be fooled, a lot of work goes down there”.

The science fictional name reflects the futuristic technology which is already ahead of its time (companies like Apple, Microsoft, Google and Canonical are using ideas conceptualized or developed by the KDE community).

With the release of Plasma 5, the community has once again shown that free software developed without any dictator or president or prime minister or chancellor can be among the best software there is.

LWN has a short story on KDE Plasma 5.0 and, as usual, it is the place to go for detailed comments, including several from KDE developers.

ZDNet’s article KDE Plasma 5 Linux desktop arrives says “I found this new KDE Plasma 5 to be a good, solid desktop” and concludes “I expect most, if not all, KDE users to really enjoy this new release. Have fun!” And indeed we do have lots of fun.

And Techage gets the nomenclature wrong in KDE 5 is Here: Introducing a Cleaner Frontend & An Overhauled Backend but says “On behalf of KDE fans everywhere, thank you, KDE dev team”. Aah, it’s nice to be thanked.

Web UPD8 has How To Install KDE Plasma 5 In Kubuntu 14.10 Or 14.04, a useful guide to setting it up which even covers removing it when you want to go back to the old stuff. Give it a try!

 

Categories: FLOSS Project Planets

Drupal Association News: How Your Membership Gives Back to the Community

Planet Drupal - Wed, 2014-07-16 11:05

When people ask me, “What’s Drupal?”, I find it a complex question to answer. Of course, in a technical sense, Drupal is a CMS, but to me, and to many others, it’s far more than that. It’s a community full of amazing people with inspiring leaders and huge hearts.

At DrupalCon Austin, I was able to share several stories about community members who really pushed the project further, all with the help of community cultivation grants selected and financed by the men and women who love Drupal. I want to thank Gabor, Sheng, and Tatiana for letting me share their stories and I'd like to share these stories with all of you.

Drupal Dev Days: A Week of Sprints in Szeged

Earlier this year, Gábor Hojtsy organized a dev days event that was a huge success. From March 24 to March 30 this year, three hundred people gathered in Szeged to sprint together on Drupal 8 core, Drupal 8 Search and Entity Field API, Documentation, Migration, MongoDB, REST and authentication, Rules for Drupal 8 and, of course, Drupal.org.

There was so much happening that they almost brought Drupal.org to a halt, but fortunately everything came out OK, and we had huge improvements to Drupal.org as a result.

There were big benefits to Drupal 8 at Szeged, too. Some of the things that our great sprinters accomplished were:

  •  115 core commits with 706 files changed, 10967 insertions(+), 6849 deletions(-)
  •  19 beta blocker and beta target issues fixed

It was the community that made Dev Days Szeged so great. By turning out and sprinting, they made big improvements to the project, while a community cultivation grant funded part of the Internet fees. Connectivity is an important element of any sprint, but the real significance is that Drupal Association members who could not attend the sprints, or who are not in a role to contribute code, were still able to help achieve this success by funding it through their membership.

DrupalCamp Shanghai

Sheng is the community leader of the Shanghai community, and as an ex-New Yorker, he knew firsthand how important Drupal meetups and camps are for networking and learning. After he moved to Shanghai, he decided that he wanted to share the valuable experience of face-to-face time with his new local community, which had skilled developers who were mostly disconnected from each other and the wider global community.

After building momentum by holding a number of meetups, Sheng applied for grants in 2013 and 2014 to put on a Shanghai DrupalCamp. With the funds, he flew in a Drupal rock star to keynote each of the camps (Forest Mars and John Albin). The camp doubled in size, and hundreds of people now come out to these events.

While the investment of flying John Albin out to Shanghai from Taiwan was relatively small, the impact and ROI were huge, both for the camps and for the greater Drupal community: camp attendees learned to contribute back to the larger global community, almost like a small R&D investment. It wouldn’t have been possible without a community grant.

DrupalCamp Donetsk

Tatiana of the Drupal Association worked with her colleagues to put together a DrupalCamp in Donetsk, in spite of the revolution. A lot of people came together and connected both to each other and to the global community, and a grant paid for the food and coffee. For us at the Association, that grant stands as a sign of positive support from the greater Drupal community in spite of the strife that was going on in Donetsk.

In the end, lots of thanks need to go around. First, I’d like to thank Gabor, Sheng and Tatiana and all community leaders for turning your vision into reality and for the time and passion you pour into Drupal. We appreciate all that you do to unite and grow Drupal.

Secondly, none of this would be possible without the three community leaders who manage the volunteer program: Mike Anello, Amy Scarvada, and Thomas Turnbull. Your passion for growing the community is doing great things.

Finally, I want to issue a big thank you to our Drupal Association Members for making these stories a reality. If you want to become a member and help more of these programs around the world come to life, please sign up today at https://assoc.drupal.org/membership.

Categories: FLOSS Project Planets

Matthias Klumpp: AppStream 0.7 specification and library released

Planet KDE - Wed, 2014-07-16 11:03

Today I am very happy to announce the release of AppStream 0.7, the second-largest release (judging by commit number) after 0.6. AppStream 0.7 brings many new features for the specification, adds lots of good stuff to libappstream, introduces a new libappstream-qt library for Qt developers and, as always, fixes some bugs.

Unfortunately we broke the API/ABI of libappstream, so please adjust your code accordingly. Apart from that, any other changes are backwards-compatible. So, here is an overview of what’s new in AppStream 0.7:

Specification changes

Distributors may now specify a new <languages/> tag in their distribution XML, providing information about the languages a component supports and the completion percentage for each language. This allows software centers to apply smart filtering on applications to highlight the ones which are available in the user’s native language.

A new addon component type was added to represent software which is designed to be used together with a specific other application (think of a Firefox addon or GNOME-Shell extension). Software-center applications can group the addons together with their main application to provide an easy way for users to install additional functionality for existing applications.

The <provides/> tag gained a new dbus item type to expose D-Bus interface names the component provides to the outside world. This means that in the future it will be possible to search for components providing a specific D-Bus service:

$ appstream-index what-provides dbus org.freedesktop.PackageKit.desktop system

(if you are using the cli tool)

A <developer_name/> tag was added to the generic component definition to define the name of the component developer in a human-readable form. Possible values are, for example “The KDE Community”, “GNOME Developers” or even the developer’s full name. This value can be (optionally) translated and will be displayed in software-centers.

An <update_contact/> tag was added to the specification, to provide a convenient way for distributors to reach upstream to talk about changes made to their metadata or issues with the latest software update. This tag was already used by some projects before, and has now been added to the official specification.

Timestamps in <release/> tags must now be UNIX epochs; YYYYMMDD is no longer valid (fortunately, everyone is already using UNIX epochs).
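If you still have old-style YYYYMMDD dates lying around, converting them is a one-liner. As a purely illustrative sketch using GNU date (not part of the AppStream tooling itself):

$ date -u -d "2014-07-16" +%s
1405468800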

Last but not least, the <pkgname/> tag is now allowed multiple times per component. We still recommend creating metapackages according to the contents the upstream metadata describes and placing the file there. However, in some cases defining one component to be in multiple packages is a short way to make metadata available correctly without excessive package tuning (which can become difficult if a <provides/> tag needs to be satisfied).

As a small side note: the multiarch path in /usr/share/appdata is now deprecated, because we think that we can live without it (by shipping -data packages per library and using smarter AppStream metadata generators which take advantage of the ability to define multiple <pkgname/> tags).

Documentation updates

In general, the documentation of the specification has been reworked to be easier to understand and to include less duplication of information. We now use extensive cross-linking to show you the information you need in order to write metadata for your upstream project, or to implement a metadata generator for your distribution.

Because the specification needs to define the allowed tags completely and contain as much information as possible, it is not very easy to digest for upstream authors who just want some metadata shipped quickly. In order to help them, we now have “Quickstart pages” in the documentation, which are rich in examples and contain the most important subset of information you need to write a good metadata file. These quickstart guides already exist for desktop applications and addons; more will follow in the future.

We also have an explicit section dealing with the question “How do I translate upstream metadata?” now.

More changes to the docs are planned for the next point releases. You can find the full project documentation at Freedesktop.

AppStream GObject library and tools

The libappstream library also received lots of changes. The most important one: we switched from LGPL-3+ to LGPL-2.1+. People who know me know that I love the v3 family of GPL licenses – I like it for its tivoization protection, its explicit compatibility with some other important licenses, and cosmetic details, like entities not losing their right to use the software forever after a license violation. However, an LGPL-3+ library does not mix well with projects licensed under other open source licenses, mainly GPL-2-only projects. I want libappstream to be usable by anyone without forcing the project to change its license. For some reason, using the library from proprietary code is easier than using it from a GPL-2-only open source project. The license change was also a popular request from people wanting to use the library, so I made the switch with 0.7. If you want to know more about the LGPL-3 issues, I recommend reading this blogpost by Nikos (GnuTLS).

On the code-side, libappstream received a large pile of bugfixes and some internal restructuring. This makes the cache builder about 5% faster (depending on your system and the amount of metadata which needs to be processed) and prepares for future changes (e.g. I plan to obsolete PackageKit’s desktop-file-database in the long term).

The library also brings back support for legacy AppData files, which it can now read. However, appstream-validate will not validate these files (and will kindly ask you to migrate to the new format).

The appstream-index tool received some changes, making its command-line interface a bit more modern. It is also possible now to place the Xapian cache at arbitrary locations, which is a nice feature for developers.

Additionally, the testsuite got improved and should now work on systems which do not have metadata installed.

Of course, libappstream also implements all features of the new 0.7 specification.

With the 0.7 release, some symbols were removed which have been deprecated for a few releases, most notably as_component_get/set_idname, as_database_find_components_by_str, as_component_get/set_homepage and the “pkgname” property of AsComponent (which is now a string array and called “pkgnames”). API level was bumped to 1.

Appstream-Qt

A Qt library to access AppStream data has been added. So if you want to use AppStream metadata in your Qt application, you can easily do that now without touching any GLib/GObject based code!

Special thanks to Sune Vuorela for his nice rework of the Qt library!

And that’s it for the changes for now! Thanks to everyone who helped make 0.7 ready, be it with feedback, contributions to the documentation, translations or code. You can get the release tarballs at Freedesktop. Have fun!

Categories: FLOSS Project Planets

Dirk Eddelbuettel: Introducing RcppParallel: Getting R and C++ to work (some more) in parallel

Planet Debian - Wed, 2014-07-16 10:56
A common theme over the last few decades was that we could afford to simply sit back and let computer (hardware) engineers take care of increases in computing speed thanks to Moore's law. That same line of thought now frequently points out that we are getting closer and closer to the physical limits of what Moore's law can do for us.

So the new best hope is (and has been) parallel processing. Even our smartphones have multiple cores, and most if not all retail PCs now possess two, four or more cores. Real computers, aka somewhat decent servers, can be had with 24, 32 or more cores as well, and all that is before we even consider GPU coprocessors or other upcoming changes.

And sometimes our tasks are embarrassingly simple, as is the case with many data-parallel jobs: we can use higher-level operations such as those offered by the base R package parallel to spawn multiple processing tasks and gather the results. I covered all this in some detail in previous talks on High Performance Computing with R (and you can also consult the Task View on High Performance Computing with R, which I edit).

But sometimes we can't use data-parallel approaches. Hence we have to redo our algorithms, which is really hard. R itself has been relying on the (fairly mature) OpenMP standard for some of its operations. Luke Tierney's (awesome) keynote in May at our (sixth) R/Finance conference mentioned some of the issues related to OpenMP. One which matters is that OpenMP works really well on Linux, and either not so well (Windows) or not at all (OS X, due to the usual issue with the gcc/clang switch enforced by Apple, but the good news is that the OpenMP toolchain is expected to make it to OS X in some more performant form "soon"). R is still expected to make wider use of OpenMP in future versions.

Another tool which has been around for a few years, and which can be considered equally mature, is the Intel Threaded Building Blocks library, or TBB. JJ recently started to wrap this up for use by R. The first approach resulted in a (now superseded, see below) package TBB. But hardware and OS issues bite once again, as the Intel TBB does not really build that well with the Windows toolchain used by R (which is based on MinGW).

(And yes, there are two more options. But Boost Threads requires linking, which precludes easy use e.g. via our BH package. And C++11 with its threads library (based on Boost Threads) is not yet as widely available as R and Rcpp, which means that it is not a real deployment option yet.)

Now, JJ, being as awesome as he is, went back to the drawing board and integrated a second threading toolkit: TinyThread++, a small header-only library without further dependencies. Not as feature-rich as Intel Threaded Building Blocks, but at least available everywhere. So a new package RcppParallel, so far only on GitHub, wraps around both TinyThread++ and Intel Threaded Building Blocks and offers a consistent interface available on all platforms used by R.

Better still, JJ also authored several pieces demonstrating this new package for the Rcpp Gallery:

All four are interesting and demonstrate different aspects of parallel computing via RcppParallel. But the last article is key. Based on a question by Jim Bullard, and then written with Jim, it shows how a particular matrix distance metric (which is missing from R) can be implemented in a serial manner both in R and via Rcpp. The key implementation, however, uses both Rcpp and RcppParallel and thereby achieves a truly impressive speed gain, as the gains from using compiled code (via Rcpp) and from using a parallel algorithm (via RcppParallel) are multiplicative! Between JJ's and my four-core machines the gain was between 200 and 300 fold, which is rather considerable. For kicks, I also used a much bigger machine at work which came in at an even larger speed gain (but gains become clearly sublinear as the number of cores increases; there are however some tuning parameters).

So these are exciting times. I am sure there will be lots more to come. For now, head over to the RcppParallel package and start playing. Further contributions to the Rcpp Gallery are not only welcome but strongly encouraged.
Categories: FLOSS Project Planets

Python Anywhere: Outage Report for 15 July 2014

Planet Python - Wed, 2014-07-16 10:27

After a lengthy outage last night, we want to let you know about the events that led up to it and how we can improve our outage responses to reduce or eliminate downtime when things go wrong.

As usual with these things, there is no single cause to be blamed. It was a confluence of a number of things happening together:

  1. An AWS instance that was running a critical piece of infrastructure had a hardware failure
  2. We performed a non-standard deploy earlier in the day
  3. We took too long to realize the severity of the hardware failure

It is a fact of life that our machines run on physical hardware, as much as the cloud in general, and AWS in particular, try to insulate us from that fact. Hardware fails, and we need to deal with it when it does. In fact, we believe that a large part of the long-term value of PythonAnywhere is that we deal with it so you don't have to.

Since our early days, we have been finding and eliminating single points of failure to increase the robustness of our service, but there are still a few left and we have plans to eliminate them, too. One of the remaining ones is the file server and that's the machine that suffered the hardware failure last night.

The purpose of the file server is to make your private file storage available to all of the web, console, and scheduled task servers. It does this by owning a set of Elastic Block Storage devices, arranged in a RAID cluster, and sharing them out over NFS. This means that we can easily upgrade the file server hardware, and simply move the data storage volumes over from the old hardware to the new.

Under normal operation, we have a backup server that has a live, constantly updated copy of the data from the file server, so in the event of either a file server outage, or a hardware problem with the file server's attached storage, we can switch across to using that instead. However, yesterday, we upgraded the storage space on the backup server and switched to SSDs. This meant that, instead of starting off with a hot backup, we had a period where the backup server disks were empty and were syncing with the file server. So we had no fallback when the file server died. Just to be completely clear -- the data storage volumes themselves were unaffected. But the file server that was connected to them, and which serves them out via NFS, crashed hard.

With all of that in mind, we decided to try to resurrect the file server. On the advice of AWS support, we tried to stop the machine and restart it so it would spin up on new hardware. The stop ended up taking an enormous amount of time and then restarting it took a long time, too. After trying several times and poring over boot logs, we determined that the boot disk of the instance had been corrupted by the hardware failure. Now the only path we could see to a working cluster was to create an entirely new one which could take over and use the storage disks from the dead file server. So we kicked off the build (which takes about 20min) and waited. After re-attaching the disks, we checked that they were intact and switched over to the new cluster.

Lessons and Responses

Long term
  • A file server hardware failure is a big deal for us: it takes everything down. Even under normal circumstances, switching across to the backup is a manual process and takes several minutes. And, as we saw, rare circumstances can make it significantly worse. We need to remove it as a single point of failure.
Short term
  • A new cluster may be necessary to prevent extended downtime like we had last night. So our first response to failure of the file server must be to start spinning up a new cluster so it's available if we need it. This should mean that our worst downtime could be about 40 mins from when we start working on it to having everything back up.
  • We need to ensure that all deploys (even ones where we're changing the storage on either the file or backup servers) start with the backup server pre-seeded with data so the initial sync can be completed quickly.

We have learned important lessons from this outage and we'll be using them to improve our service. We would like to extend a big thank you and a grovelling apology to all our loyal customers who were extremely patient with us.

Categories: FLOSS Project Planets

Bryan Pendleton: What an innocuous headline...

Planet Apache - Wed, 2014-07-16 10:15

7 safe securities that yield more than 4%

Eveans and his team analyze more than 7,000 securities worldwide but only buy names that offer payouts no less than double the yield of the overall stock market — as well as reasonable valuation and competitive advantages that will keep earnings growing over time.

Sounds like a pleasant article to read, no?

Well, it turns out that the companies that they are recommending you invest in are:

  • Cigarette companies (Philip Morris)
  • Oil companies (Vanguard Resources, Williams Partners)
  • Leveraged buyout specialists (KKR)

I guess the good news is that they didn't include any arms dealers or pesticide manufacturers.

Categories: FLOSS Project Planets

Edward J. Yoon: Sincerity does not get through

Planet Apache - Wed, 2014-07-16 09:12
Quite often, the people who pollute the market are the opinion leaders. That is true in politics, and in IT as well...
Sincerity rarely gets through.

That is because the way the world works is give and take, not pure-heartedness.
Categories: FLOSS Project Planets

Review: Penetration Testing with the Bash shell by Keith Makan – Packt Pub.

LinuxPlanet - Wed, 2014-07-16 09:07

Penetration Testing with the Bash shell

I’ll have to say that, for some reason, I thought this book was going to be some kind of guide to using only bash itself to do penetration testing. It’s not that at all. It’s really more like doing penetration testing FROM the bash shell, or command line if you like.

The first two chapters take you through a solid amount of background bash shell information. You cover topics like directory manipulation, grep, find, and understanding some regular expressions: all the sorts of things you will appreciate knowing, or at least want a good topical smattering of, if you are going to be spending some time at the command line. There is also some time spent on customization of your environment, like prompts and colorization and that sort of thing. I am not sure it’s really terribly relevant to the book topic, but still, as I mentioned before, if you are going to be spending time at the command line, this is stuff that’s nice to know. I’ll admit that I got a little charge out of it because my foray into the command line was long ago on an amber phosphor serial terminal. We’ve come a long way, Baby.

The remainder of the book deals with some command line utilities and how to use them in penetration testing. At this point I really need to mention that you should be using Kali Linux or BackTrack Linux because some of the utilities they reference are not immediately available as packages in other distributions. If you are into this topic, then you probably already know that, but I just happened to be reviewing this book on a Mint system while away from my test machine and could not immediately find a package for dnsmap.

The book gets topically heavier as you go through, which is a good thing IMHO, and by the time you are nearing the end you have covered standard bash arsenal commands like dig and nmap. You have spent some significant time with metasploit and you end up with the really technical subjects of disassembly (reverse engineering code) and debugging. Once you are through that, you dive right into network monitoring, attacks and spoofs. I think the networking info should have come before the code hacking, but I can also see their logic in this roadmap as well. Either way, the information is solid and sensible, it’s well written, and the examples work. You are also given plenty of topical reference information should you care to continue your research, and this is something I think people will really appreciate.

To sum it up, I like the book. Again, it wasn’t what I thought it was going to be, but it surely will prove to be a valuable reference, especially combined with some of Packt’s other fine books like those on BackTrack. Buy your copy today!

Categories: FLOSS Project Planets

Acquia: Drupal for Digital Commerce – Bojan Živanović

Planet Drupal - Wed, 2014-07-16 08:49

Bojan and I chatted at Drupal Dev Days 2014 about one of the newest and most important weapons available in Drupal's eCommerce arsenal: recurring billing for digital commerce in Drupal Commerce.

Categories: FLOSS Project Planets

Machinalis: Adding PostgreSQL support to Enthought's Canopy on OS X

Planet Python - Wed, 2014-07-16 08:03

Although Enthought’s Canopy Python distribution for OS X is fairly complete, it doesn’t have the psycopg2 module and it is not available on the community repository.

Before you start

Some readers pointed out that PostgreSQL might already be installed, so before you do anything, check whether pg_config is present on your system. If it is, you can skip the next section, as you only need to install psycopg2 using pip. (Thanks, RobinD63!)
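A minimal sketch of that check from a terminal (the paths and version shown are only examples of what you might see):

$ which pg_config          # prints something like /usr/local/bin/pg_config if the client tools are installed
$ pg_config --version      # prints the PostgreSQL version, e.g. PostgreSQL 9.3.4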

Installing dependencies

Given that Canopy is a full-fledged Python environment, we could try to install the module using pip, but before we can do that we need to make sure that we have Xcode’s command line utilities installed. The easiest way to do this is to install Homebrew. Use this command and follow the instructions on the screen:

ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"

If you try to install psycopg2 with pip now, it’ll fail with the following message:

Error: pg_config executable not found

This is because psycopg2 depends on PostgreSQL client tools. Sadly, these aren’t available as a separate package, so we need to install a PostgreSQL server just to have access to the pg_config executable. We cannot use Homebrew to install PostgreSQL because it is not compatible with Canopy’s Python environment, and Python is required to build PostgreSQL.

In order to install a full PostgreSQL distribution you can use EnterpriseDB’s PostgreSQL graphical installer. All you need to do is follow the instructions on screen, unless you’re planning on actually using the server and not just the client libraries, in which case pay attention to the configuration dialogs and remember the passwords you set for the postgres user.

If you really don’t need a PostgreSQL server running, you can just install Postgres.app.

Installing psycopg2

Once PostgreSQL is installed, we can build the psycopg2 module. If you chose the EnterpriseDB installer, use:

$ PATH=$PATH:/Library/PostgreSQL/9.3/bin/ pip install psycopg2

or:

$ PATH=$PATH:/Applications/Postgres.app/Contents/Versions/9.3/bin pip install psycopg2

if you used Postgres.app. In either case, if all goes well you’ll see the following message:

Successfully installed psycopg2
Cleaning up...

Let’s see if the module works properly. Open a python (or an ipython) shell, and import the psycopg2 module:

In [1]: import psycopg2
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-1-bd284aa2cf56> in <module>()
----> 1 import psycopg2

/Users/guest/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/psycopg2/__init__.py in <module>()
     48 # Import the DBAPI-2.0 stuff into top-level module.
     49
---> 50 from psycopg2._psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
     51
     52 from psycopg2._psycopg import Binary, Date, Time, Timestamp

ImportError: dlopen(/Users/guest/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/psycopg2/_psycopg.so, 2): Library not loaded: libssl.1.0.0.dylib
  Referenced from: /Users/guest/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/psycopg2/_psycopg.so
  Reason: image not found

The module is failing to import because it depends on OpenSSL. In order to install it, we’re going to use brew:

$ brew install openssl
$ brew link -f openssl

You can safely ignore any warning messages brew gives you. Let’s try to import psycopg2 once again:

In [1]: import psycopg2

In [2]:

And with that, we added support for PostgreSQL to Enthought’s Canopy Python distribution on OS X.

Categories: FLOSS Project Planets

Omaha Python Users Group: July Meeting

Planet Python - Wed, 2014-07-16 08:00

Date: Wednesday, July 16, 2014.

Location: Fox Hollow Coffee, 1919 Papillion Pkwy (approx. 113th and Blondo)

Time: 7pm

Topics:

  • Django-taggit
  • What database to use for production site?
  • add yours here.
Categories: FLOSS Project Planets

"Menno's Musings": Inbox is sponsoring IMAPClient development

Planet Python - Wed, 2014-07-16 07:10

Now that they have officially launched I can happily announce that the good folks at Inbox are sponsoring the development of certain features and fixes for IMAPClient. Inbox have just released the initial version of their open source email sync engine which provides a clean REST API for dealing with email - hiding all the archaic intricacies of protocols such as IMAP and MIME. IMAPClient is used by the Inbox engine to interact with IMAP servers.

The sponsorship of IMAPClient by Inbox will help to increase the speed of IMAPClient development, and all improvements will be open source, feeding directly into trunk so that all IMAPClient users benefit. Thanks, Inbox!

The first request from Inbox is to fix some unicode/bytes handling issues that crept in as part of the addition of Python 3 support. It's a non-trivial amount of work but things are going well. Watch this space...

Categories: FLOSS Project Planets

Vincent Sanders

Planet Debian - Wed, 2014-07-16 07:04
It is no great secret that my colleagues at Collabora have been doing work with the Raspberry Pi Foundation.

My desk is very near Marco and I often see him working with the various Pi boards. Recently he obtained one of the new B+ units for testing and I thought it looked a little sad sat naked on his desk.

To remedy this bare-board problem I designed and built a laser-cut case for him, and now that the B+ has been publicly released I can make the design freely available.

The design is completely original, though it is inspired by several other plastic "clip" type designs I have seen. Originally I created and debugged the case design for my Parallella, though tweaking it for the Pi was pretty easy.

The design is under a CC attribution licence, and I ought to say that my employer is in no way responsible for this; it's all my own fault.
Categories: FLOSS Project Planets

Wouter Verhelst: Reprepro for RPM

Planet Debian - Wed, 2014-07-16 06:43

Dear lazyweb,

reprepro is a great tool. I hand it some configuration and a bunch of packages, and it creates the necessary directory structure, moves the packages to the right location, and generates a (signed) Debian package repository. Obviously it would be possible to do everything reprepro does by hand, by calling things like cp and dpkg-scanpackages and gpg myself, but it's easy to forget a step when doing so, and having a tool that just does things for me is wonderful. The fact that it does so only on request (i.e., when I know something has changed, rather than "once every so often") is also quite useful.
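For readers who have not used it, here is a minimal sketch of what that configuration and invocation can look like; the repository path, codename and signing key are invented for the example:

# conf/distributions (one stanza per suite)
Codename: wheezy
Components: main
Architectures: amd64 i386 source
SignWith: ABCD1234

# add or update a package; reprepro builds dists/ and pool/ and signs the Release file
$ reprepro -b /srv/debian-repo includedeb wheezy mypackage_1.0-1_amd64.deb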

At work, I currently need to maintain a bunch of package repositories. The Debian package archives there are maintained with reprepro, but I currently maintain the RPM archives pretty much by hand: create the correct directories, copy the right files to the right places, run createrepo over the correct directories (and in the case of the OpenSUSE repository, also run gpg), and a bunch of other things specific to our local installation. As if to prove my above point, apparently I forgot to do a few things there, meaning some of the RPM repositories didn't actually work correctly, and my testing didn't catch it.
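For comparison, the by-hand RPM workflow described above boils down to something like this (the paths and signing details are made up for the sketch; local specifics will differ):

$ mkdir -p /srv/rpm-repo/opensuse-13.1/x86_64
$ cp *.rpm /srv/rpm-repo/opensuse-13.1/x86_64/
$ createrepo /srv/rpm-repo/opensuse-13.1/x86_64                                       # (re)generate repodata/
$ gpg --detach-sign --armor /srv/rpm-repo/opensuse-13.1/x86_64/repodata/repomd.xml    # sign repomd.xml for the OpenSUSE case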

Which makes me wonder how RPM package repositories are usually maintained. When one needs to maintain just a bunch of packages for a number of servers, well, running createrepo manually isn't too much of a problem. When it gets beyond one's own systems, however, and when you need to support multiple builds for multiple versions of multiple distributions, having to maintain all those repositories by hand is probably not the best idea.

So, dear lazyweb: how do large RPM repositories maintain state of the packages, the distributions they belong to, and similar things?

Please don't say "custom scripts"

Categories: FLOSS Project Planets

Juliana Louback: Become an Open-source Contributor Video Conference

Planet Debian - Wed, 2014-07-16 06:06

A couple weeks ago I was contacted by Yehuda Korotkin through one of the Debian mailing lists. Yehuda is a tech professor at one of Israel’s leading colleges for women. He proposed a video conference to present the open source community to his class and explain how they can contribute to open source projects. I volunteered to participate and last Monday (July 14th 2014) we had our virtual meeting, which I hope was the first of many.

Shauna G. also participated from Boston, making it an Israel-USA-Brazil meetup. Pretty impressive, eh? The conference was hosted on Google Hangouts and the video is available for viewing here. Run time is around 2.5 hours, so to save you time I’ve kindly posted a summary of the conference! To be frank, I look ghastly half the time, so I’d much prefer that you read this post.

First there was a Q&A period which was more of a discussion panel, followed by a hands-on session where the girls made their first contribution to an open source project. Here’s the gist of it:

Q&A

  • What is open source [software]? Software that allows free access to its source code (ergo ‘open’), permitting analysis and any modifications to the original code. All these permissions are contained in a license included in the software. Note that the term ‘open source’ is now applied to a lot of things other than software, and I assume there’s a different definition for those cases.

  • Who is the community behind open source software? Shauna pointed out that there are many kinds of projects and many kinds of communities backing them. As an example of a for-profit model, Shauna cited RedHat, which offers Linux (an open source OS) for free and profits from support services. Some projects are supported by NGOs (example: Center for Open Science) with a specific objective, while a series of other projects are volunteer-based or the result of a hobby project. The contributors vary accordingly; they are not only programmers but also people with non-technical backgrounds. In sum, there’s a lot of variety in the open source community as a whole; the details of structure vary from case to case.

  • Why is it important that women get involved in technology? Men and women are different for a series of reasons, among them our biological composition, the influences of society and our life experiences. Technology is for everyone, male or female, and we need to be able to design products that will suit everyone well. If the development team is composed of only one sex or the other, it’s likely that factors will be overlooked or ignored, and the result is not a democratic experience. Shauna gave some examples of this, such as voice-powered location search software that could easily identify stereotypically ‘male’ keywords but not many of interest to women.

    In addition to this, it’s known that there is a severe workforce deficiency in the tech industry. One proposed solution is to encourage more women to follow a career in tech. The argument is based on the assumption that any boy who is remotely interested in tech has a high likelihood of studying tech, but not every girl. Often girls are not encouraged to study STEM-related topics, and in many places tech is considered inappropriate for women. I’ve read that tech companies’ engineering teams are about 12% female and 88% male. That number is consistent with the stats for female college grads in STEM majors. I think boys who love tech will keep getting into tech, but there are a lot of girls who could be great engineers if they only gave it a shot. Women should be encouraged to pursue a tech career not only so they’ll get to work in such an incredible field, but also so they can help close the gap between supply and demand in the tech workforce today.

  • Why is open source important? We don’t always get everything we want or need from off-the-shelf software. The great thing about open source is that you are free to tweak the code at will and add whatever features you’d like. Sometimes there are even more important necessities, such as security requirements. With open source you don’t need to just take the manufacturer’s word that the code is bulletproof; open it and check it out yourself. Another factor that increases the value of open source is its stability. As Linus’s Law puts it, “given enough eyeballs, all bugs are shallow”. With so many developers opening, studying and testing the code, bugs are quickly identified and removed. The fact that open source software is ‘free’ doesn’t reflect negatively on its quality; to the contrary, plenty of open source software is the best that’s out there.

  • Why is it important to contribute to open source? To start, it’s not written anywhere that you have to contribute. That said, it sure is nice if you do. I think there are two main reasons why people contribute. One is that they want to ‘give back’ to the open source community. The other is that they want to add some missing detail, feature or entire product. Of course, usually if you need something, someone else does as well, so by supplying your own need you end up helping many people along the way. But one thing that I think applies especially to students is that it’s a great way to gain experience. You have to fit in some practical learning along with all that theory, and one can only be so enthusiastic about the CS projects done in school that are usually discarded at the end of the semester. By contributing to an open source project, you are doing something that’s real and that people will actually use. You also have access to an incredible network of mentors who are real pros and more than willing to help you out. Maybe I’ve just been lucky, but they are really nice people who won’t snicker if you ask a stupid question. Well, maybe they do. But I haven’t seen it happen, and I’m the queen of stupid questions. Many times this is their pet project, so they’re thrilled to have your contribution. Another great thing is that you can choose what you want to work on. It’s easy to find a tech internship out there; what’s not easy is finding an interesting internship. You might work with something you dislike or (gasp) end up just bringing people coffee. But we do that stuff because we need the work experience. With open source you have a huge array of projects you can work on, and one of them is certain to suit your interests.

  • How do you become a contributor? When using open source software, you might notice things you’d like to change, or actual bugs. Let the development team know about them, and be sure to also let them know if you think you can figure out the fix. And if that hasn’t happened yet, try exploring GitHub’s open source repositories and check out the ‘Issues’ option (upper right corner). There’s usually a list of bugs or wishlist features that you just might be the man or woman for.

Hands on

For the practical part of our conference, Yehuda’s students contributed to JSCommunicator’s Issue #5, which requests i18n support for major languages. JSCommunicator is a SIP communication tool developed in HTML and JavaScript, currently supporting audio/video calls and SIP SIMPLE instant messaging.

First we set up GitHub on their local machines and forked the JSCommunicator repo. We learned the basic git commands needed to push changes to their remote repositories and to create a pull request. This is explained in further detail here.
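For readers who want to try the same exercise, the workflow we walked through looks roughly like this; the user name, repository URL and branch name are placeholders:

# after forking the JSCommunicator repository on GitHub to your own account
$ git clone https://github.com/<your-username>/jscommunicator.git
$ cd jscommunicator
$ git checkout -b hebrew-translation        # do the work on a topic branch
$ git add .
$ git commit -m "Add Hebrew translation"
$ git push origin hebrew-translation
# then open a pull request from the branch on the GitHub web interface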

At the end of the conference, which ran nearly an hour over due to my tendency to be overly verbose, Yehuda’s class made their first contribution: a Hebrew translation for JSCommunicator!

All in all, it was great to virtually meet Yehuda and his epic students, who all seem to be very clever engineers. I’m looking forward to our next meetup, once I gain some time-management skills.

Categories: FLOSS Project Planets

Edward J. Yoon: The useless box

Planet Apache - Wed, 2014-07-16 05:52


It's been a while since I came across something this surprising +_+;
Categories: FLOSS Project Planets

It all comes together: no more Software Compilation but more KDE!

Planet KDE - Wed, 2014-07-16 05:43
KDE 4.0 demo in Dresden, 2007 (short hair time, yes)

With the KDE 4.0 release we had the issue that everything was one big blob: the libraries, the desktop and the applications, all inter-dependent.

Back then, at the end of 2007, the libraries and many of the applications were in very good shape. I especially remember the KDE Edu applications: they had been stable, pretty and awesome for many months before the release, and their developers were itching to get their code to users. I had made a blog post with cool videos of KDE Edu apps in October 2007. Here is Kalzium at that point:



Unfortunately, the desktop, having undergone a HUGE rewrite, was not at that same level of quality. As I wrote later that month:
"When I show people the state of Plasma, they're like "hmm, that's not good". So I then proceed to show the Edu and Games, cheers them right up."
But the last release of KDE had been in 2005 (!!) and after more than two years, we really wanted the new and improved apps to get out to users. The desktop was basically workable, so we decided to release. Code that is not in users' hands rots away...

We all know how that went - distributions shipped it as default and the internet erupted with hate.
Doing better now

So, for the 5 series, we split it all up: Frameworks 5.0 (the new name of our modularized libraries) was released last week, the desktop came out yesterday, and the Applications still mostly have to start moving to Qt5/Frameworks 5... We weren't forced to release half-baked stuff; everything came 'when done'.

KDE is now People. And dragons.

Better separation: rebranding

That was possible because we rebranded 'KDE' to mean the community in November of 2009. This created (over time...) separate brands for 'Plasma', 'Applications' and 'Platform' (now 'Frameworks') which could release on their own. Could being the operative term here, as we kept releasing them together.

That created quite some branding confusion, also because we had not thought through all the issues we would bump into. So when we finally decoupled releases at the release of Plasma 4.11 (the latest release in the 4.x series) and the KDE Platform at 4.9 (although that has received some serious updates since then and has kept increasing its version number for packaging convenience), it was largely ignored.

Which in turn created some confusion when Frameworks 5.0 came out - several people asked 'where can I get KDE 5', expecting to run the desktop and applications already. Well, I'm quite OK with users saying 'I use KDE' as long as they mean Plasma and realize there is more to KDE than the desktop. Because when I say I use Microsoft, I am not lying. I've always been a huge fan... of their keyboards. Not joking, their operating system wasn't great last time I used it but I love the 'comfort curve' series of keyboards. They should stick to hardware, clearly their strongest point.

In hindsight, we probably would have done better to wait until there was a real need for the rebranding - like today, or in 2013, when Plasma stopped releasing. Then again, we did expect that need to come soon - there had already been talks about disconnecting the releases, it just didn't happen. Well, hindsight is always 20-20, they say...

And as the title points out - there is no more need for the 'Software Compilation' term, which was invented to solve the confusion of 'KDE releases three separate things but all at once'. We no longer release the Applications, Desktop and Libraries at once...

Better communication

Another thing we changed is our communication. In the KDE 4 times, what we did was PROMO: being as positive as one could be ;-)

Since then, we've learned a little actual marketing. Perception management and fancy stuff like that. Including properly explaining what something is and is not ready for! That's why I wrote a known issues section for the beta of Plasma, including:

"With a substantial new toolkit stack below come exciting new crashes and problems that need time to be shaken out."
With a clear section on where we stand in the final release announcement on the Dot (see Suitability and Updates at the bottom of the article) we have made clear what the state is - and that we don't think distributions should ship Plasma 5.0 as their default. And distributions have picked up on this - at least, neither Kubuntu nor openSUSE will ship their upcoming release based on Plasma 5.0.


Forward

Now, the future. Plasma 5.0 is out, and on a 3-month release cycle. Frameworks comes with a new version every month, and the Applications are still at 4.13, with a beta of 4.14 out last week. After 4.14 is out, the work on a Frameworks 5/Qt 5 port will commence at full steam, but some applications will be ready before others. How will we deal with that conundrum? I don't know yet. There might be two KDE Applications releases for a while, as some applications will take longer to get ported than others.

But we're better positioned than ever to bring innovations to the Linux Desktop. So let's see what the future brings!

(OK, I do have SOME thoughts on where KDE is going; see Part 1 and Part 2.)
Categories: FLOSS Project Planets