FLOSS Project Planets

gzip @ Savannah: gzip-1.8 released [stable]

GNU Planet! - Tue, 2016-04-26 17:28
This is to announce gzip-1.8, a stable release. There have been 6 commits by 2 people in the 4 weeks since 1.7. See the NEWS below for a brief summary. Thanks to everyone who has contributed! The following people contributed changes to this release:

  Jim Meyering (3)
  Paul Eggert (3)

Jim [on behalf of the gzip maintainers]

=============================================================

Here is the GNU gzip home page:
  http://gnu.org/s/gzip/

For a summary of changes and contributors, see:
  http://git.sv.gnu.org/gitweb/?p=gzip.git;a=shortlog;h=v1.8
or run this command from a git-cloned gzip directory:
  git shortlog v1.7..v1.8

To summarize the 35 gnulib-related changes, run these commands from a git-cloned gzip directory:
  git checkout v1.8
  git submodule summary v1.7

=============================================================

Here are the compressed sources:
  http://ftp.gnu.org/gnu/gzip/gzip-1.8.tar.gz (1.1MB)
  http://ftp.gnu.org/gnu/gzip/gzip-1.8.tar.xz (712KB)

Here are the GPG detached signatures[*]:
  http://ftp.gnu.org/gnu/gzip/gzip-1.8.tar.gz.sig
  http://ftp.gnu.org/gnu/gzip/gzip-1.8.tar.xz.sig

Use a mirror for higher download bandwidth:
  http://www.gnu.org/order/ftp.html

[*] Use a .sig file to verify that the corresponding file (without the .sig suffix) is intact. First, be sure to download both the .sig file and the corresponding tarball. Then, run a command like this:

  gpg --verify gzip-1.8.tar.gz.sig

If that command fails because you don't have the required public key, then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys 7FD9FCCB000BEEEE

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
  Autoconf 2.69.147-5ad35
  Automake 1.99a
  Gnulib v0.1-761-gd92a0d9

=============================================================

NEWS

* Noteworthy changes in release 1.8 (2016-04-26) [stable]

** Bug fixes

  gzip -l no longer falsely reports a write error when writing to a pipe.
  [bug introduced in gzip-1.7]

  Port to Oracle Solaris Studio 12 on x86-64.
  [bug present since at least gzip-1.2.4]

  When configuring gzip, ./configure DEFS='...-DNO_ASM...' now suppresses assembler again.
  [bug introduced in gzip-1.3.5]
Categories: FLOSS Project Planets

Evolving Web: Improving Drupal Speed with blackfire.io (Part 1)

Planet Drupal - Tue, 2016-04-26 17:21

Drupal core is pretty well optimized. But after you've finished building your Drupal 7 or 8 site, you might find some pages are loading slower than you'd like. That's not surprising—you've probably enabled scores of contrib modules, written custom code, and are running over 100 SQL queries per uncached request.

read more
Categories: FLOSS Project Planets

Aten Design Group: 404 Not Found: The Monster Under Your Bed

Planet Drupal - Tue, 2016-04-26 16:51

If you are working on a website redesign, 404s are the very real monsters under your bed. Ignore them, and they will wreak havoc on your website’s traffic. Worst of all, by the time you realize what’s happening it may already be too late.

What are 404s?

Very simply, 404s are broken links. More specifically, 404 is the HTTP response code for “Not Found,” signifying that a web page is not available at the provided URL. Reorganizing old content, changing old URLs and selectively discarding content that is no longer relevant are all common activities during website redesign projects that can result in 404s.

Why 404s Are so Bad

Your legacy content – the stuff that’s been around for 15 years, from the most up-to-date research articles, to blog posts written by employees long-gone, to PDF files in random folders off your webroot – has been quietly growing your website traffic, catching inbound links and increasing effectiveness of organic search. And the longer it has been around, the more valuable it has likely become, even if the content itself is no longer of much relevance to your organization. A quick scan of your Google Analytics will likely confirm this. Your organic search traffic probably has a very long tail: thousands or tens of thousands of pages with a few hits each, funneling users to your website.

If those URLs change, or that content is abandoned entirely, the potentially massive net you have been casting – and growing – for years will be damaged. Despite the very best user experience, the most on-target messaging and the most compelling design, years of search engine optimization (SEO) progress can be lost – all because of 404s. Your organic search rank will drop as search engines remove the now-broken URLs from their indexes. As a result, traffic will plummet. All of this can very quickly bring the success of your entire redesign project into question.

In website redesigns, 404s may very well be your worst enemy.

Combatting 404s Starts with Content Strategy

Dealing with 404s is an important, often overlooked component of effective content strategy. Communications teams frequently devote significant time to performing content audits, flagging content to be reorganized, rewritten or abandoned altogether. Far less time – if any – is given to thinking through exactly what to do with content that is left behind. It is simply abandoned. Soon after launch, someone in marketing notices a drop in traffic and suddenly 404s are on everyone’s radar.

By Default, Keep Everything

When redesigning a website, we recommend keeping just about everything. That might be the opposite of what you’ve heard before. It doesn’t lend itself to the “cleaning out the garage” or “moving to a new house” metaphors. In reality, though, your legacy content is one of your greatest assets. That junk in the garage is gold. Deal with it, but don’t abandon it.

For outdated content, channel users to more relevant offerings with good user experience design and carefully crafted messaging. Old content – even if outdated – represents an opportunity to connect with users you otherwise might miss entirely, communicating key changes in your organization or pointing to relevant, up-to-date resources. Again, dealing with legacy content is an important element of content strategy. It deserves design attention and good user experience. Craft a simple message that says “This resource is out of date. To see our more recent work in this area, see X, Y or Z.”

For content that is rewritten or moved to a new URL, use 301 redirects to redirect users automatically from old pages to their new equivalents. 301 redirects, or 301s, signal to search engines that a resource has not been eliminated; rather, it has been “Moved Permanently” and should be reindexed at its new location. 301s are hands-down the most important technical device for dealing with 404s.

(Note that 301s do not guarantee that your content will maintain its rank within search results. Rather, 301s indicate to search engines that the resource for a particular URL has been moved. Search engines will queue the new URL for reindexing, and search rank will once again be determined by a broad spectrum of factors like keyword density, page title, inbound links, etc.)

Add 301 Redirects for All Migrated Content

When migrating legacy content into your redesigned website, add a 301 redirect for every single resource, article or page being migrated. As of right now in Drupal 7, a patch for the redirect module makes this process easy: simply map the old URL to the special destination field “migrate_redirects” and the redirect module will take care of the rest.

In Drupal 8, the redirect module provides built-in support for migrating redirects from older versions of the Content Management System. A little bit of custom code in your scripted migration can take care of adding redirects for migrated nodes. (Need more info? Let us know in the comments or get in touch.)

Find and Prioritize All Legacy URLs

While adding 301 redirects for every migrated page is critical, it is not enough. Google has likely indexed large numbers of URLs for content that will not be included in your scripted migration process. Landing pages, listing pages, PDFs and anything you have specifically decided not to migrate will be omitted if your focus is solely on individual articles. To better understand the full scope of URLs that need to be dealt with, download a report of all pages from Google Analytics or whatever analytics platform you are using. This not only provides a thorough catalog of web pages, PDFs and other resources being viewed, but also shows a count of monthly page views and is incredibly helpful for establishing priority for specific pages to be redirected. Remember, your traffic has a long tail; the potentially thousands of pages that receive one or two views per month are still important.

Test All Legacy URLs In Your Redesigned Website

Once you have a list of all legacy URLs, you need to test your new, redesigned website to see which of them result in “404 Not Found” errors. We have a few custom scripts that do exactly that, written in environments ranging from Drupal modules to standalone NodeJS apps. Regardless of the specific implementation, the script needs to do the following (a rough sketch follows the list):

  1. Import a list of legacy URLs downloaded from your analytics service.
  2. Loop through the list of URLs and test each on the new website to see what HTTP status code is returned.
  3. If a 301, 302 or other redirect is returned, follow it to ensure it eventually results in a URL with an acceptable status (200 OK).
  4. Generate a report of returned status codes. We typically include page views from the originally downloaded analytics report in this CSV so we can see the status code directly beside the number of monthly views for each URL. Seeing the HTTP status code, URL and number of pageviews all side-by-side in spreadsheet format is incredibly helpful for gauging priority.
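
Here is a rough sketch of such a script in Python (the file names and CSV column names are hypothetical, and it assumes the third-party requests library is installed):

import csv
import requests

# Hypothetical input: a CSV exported from your analytics tool with
# columns "page" (the legacy path) and "pageviews".
NEW_SITE = "https://staging.example.com"  # the redesigned site under test

with open("legacy_urls.csv", newline="") as infile, \
        open("status_report.csv", "w", newline="") as outfile:
    reader = csv.DictReader(infile)
    writer = csv.writer(outfile)
    writer.writerow(["status", "url", "pageviews"])
    for row in reader:
        url = NEW_SITE + row["page"]
        try:
            # allow_redirects=True follows any 301/302 chain to its final target.
            response = requests.head(url, allow_redirects=True, timeout=10)
            status = response.status_code
        except requests.RequestException:
            status = "error"
        writer.writerow([status, url, row["pageviews"]])

Sorting the resulting CSV by pageviews puts the most damaging 404s at the top of your to-do list.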

The first time you run your script, you will likely see a very high volume of 404s. That’s fantastic: you’re seeing them now, during the redesign, before they are anywhere close to impacting SEO or traffic.

Fix the 404s

Your report of returned status codes provides a prioritized list of 404s that need to be dealt with. You will likely see a mix of landing pages, listing pages, articles, PDF files and other resources. Each URL needs to be dealt with.

Often, large numbers of similar URLs can be redirected programmatically – that is, by matching patterns rather than specific addresses. For example, a collection of folders containing PDFs may have been moved to new locations. Or URLs for pages that show content by category may need to be mapped to new category IDs. Depending on the complexity of the specific redirect pattern and the environment in which your website is hosted, programmatic redirects can be added to Drupal in a variety of ways.
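
To make the pattern idea concrete, here is a small, generic Python sketch (the paths and patterns are made up for illustration; this is not Drupal API code):

import re

# Hypothetical patterns: PDFs moved to a new folder, category pages remapped.
PATTERNS = [
    (re.compile(r"^/docs/pdfs/(?P<name>.+\.pdf)$"),
     r"/sites/default/files/documents/\g<name>"),
    (re.compile(r"^/news/category/(\d+)$"),
     r"/articles/category/\1"),
]

def match_redirect(old_path):
    """Return the new path for old_path, or None if no pattern applies."""
    for pattern, replacement in PATTERNS:
        if pattern.match(old_path):
            return pattern.sub(replacement, old_path)
    return None

print(match_redirect("/docs/pdfs/annual-report.pdf"))
# -> /sites/default/files/documents/annual-report.pdf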

Remaining URLs will simply need a single, manual redirect added. The redirect and path redirect import modules are excellent resources for manually adding 301s.

Watch Out for Index.html

If your legacy URLs are directory indexes (i.e. ending with “index.html” or “index.htm”) you will need to add an additional redirect for the version that does not include the file name.

Example: if your legacy URL is “http://example.com/path/to/file/index.html” and the new equivalent is “http://example.com/new/path/to/file”, you will need two redirects:

  • One from “http://example.com/path/to/file/index.html” to the new URL
  • Another from “http://example.com/path/to/file” (without index.html) to the new URL

We typically add additional redirects for directory indexes once all other redirect work is finished, using a simple custom script that scans the redirects table for index pages and generates the appropriate equivalent.
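
A sketch of that last step in Python (the redirect data is a made-up dictionary here; in practice it would come from the redirects table):

# Existing redirects: legacy URL path -> new URL path (example data only).
redirects = {
    "/path/to/file/index.html": "/new/path/to/file",
    "/about/index.htm": "/about-us",
}

# For every redirected directory index, also redirect the bare directory URL.
extra = {}
for old, new in redirects.items():
    for suffix in ("/index.html", "/index.htm"):
        if old.endswith(suffix):
            extra.setdefault(old[:-len(suffix)], new)

redirects.update(extra)
print(redirects)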

Test Again, Rinse and Repeat

Once all 404s have been dealt with in the ways outlined above, test your redesigned website again. You will likely find a few URLs that still need to be addressed. Rinse and repeat until the entire list of prioritized pages returns the acceptable status code of 200.

Not Quite Done

And that’s it. Almost. The final piece to combatting 404s is to monitor them closely after launch. The redirect module provides a simple admin page for doing exactly that. We strongly recommend monitoring 404s for several days after launch and adding 301s wherever appropriate.

Sit Back and Relax

Website redesign projects usually impact organizations at all levels, and we know you probably won’t be able to truly sit back and relax after launch. There will be final communications details, stakeholder reviews, content updates, ongoing bug fixes and likely a growing list of next-phase wishlist items. That said, dealing with 404s will help protect your investment in organic search and mitigate deleterious effects on web site traffic. There will still be a dip in the numbers as Google and other search engines update their indexes and re-crawl new content. This post doesn’t address SEO strategy in-depth, nor setting specific traffic goals and benchmarks as a part of planning and discovery for your website redesign. It does express the very clear need to accommodate modified URLs and abandoned pages. Without an effective redirect strategy, 404s will almost certainly wreak havoc on your organic search traffic. Good content strategy and 301 redirects are critical allies for fighting 404s and protecting your years-long investment in SEO.

Categories: FLOSS Project Planets

Jonathan McDowell: Notes on Kodi + IR remotes

Planet Debian - Tue, 2016-04-26 16:32

This post is largely to remind myself of the details next time I hit something similar; I found bits of relevant information all over the place, but not in one single location.

I love Kodi. These days the Debian packages give me a nice out of the box experience that is easy to use. The problem comes in dealing with remote controls and making best use of the available buttons. In particular I want to upgrade the VDR setup my parents have to a more modern machine that’s capable of running Kodi. In this instance an AMD E350 nettop, which isn’t recent but does have sufficient hardware acceleration of video decoding to do the job. Plus it has a built in fintek CIR setup.

First step was finding a decent remote. The fintek is a proper IR receiver supported by the in-kernel decoding options, so I had a lot of flexibility. As it happened I ended up with a surplus to requirements Virgin V Box HD remote (URC174000-04R01). This has the advantage of looking exactly like a STB remote, because it is one.

Pointed it at the box, saw that the fintek_cir module was already installed and fired up irrecord. Failed to get it to actually record properly. Googled lots. Found ir-keytable. Fired up ir-keytable -t and managed to get sensible output with the RC-5 decoder. Used irrecord -l to get a list of valid button names and proceeded to construct a vboxhd file which I dropped in /etc/rc_keymaps/. I then added a

fintek-cir * vboxhd

line to /etc/rc_maps.cfg to force my new keymap to be loaded on boot.

That got my remote working, but then came the issue of dealing with the fact that some keys worked fine in Kodi and others didn’t. This seems to be an issue with scancodes above 0xff. I could have remapped the remote not to use any of these, but instead I went down the inputlirc approach (which is already in use on the existing VDR box).

For this I needed a stable device file to point it at; the /dev/input/eventN file wasn’t stable and as a platform device it didn’t end up with a useful entry in /dev/input/by-id. A ‘quick’

udevadm info -a -p $(udevadm info -q path -n /dev/input/eventN)

provided me with the PNP id (FIT0002) allowing me to create /etc/udev/rules.d/70-remote-control.rules containing

KERNEL=="event*",ATTRS{id}=="FIT0002",SYMLINK="input/remote"

Bingo, a /dev/input/remote symlink. /etc/defaults/inputlirc ended up containing:

EVENTS="/dev/input/remote"
OPTIONS="-g -m 0"

The options tell it to grab the device for its own exclusive use, and to take all scancodes rather than letting the keyboard ones through to the normal keyboard layer. I didn’t want anything other than things specifically configured to use the remote to get the key presses.

At this point Kodi refused to actually do anything with the key presses. Looking at ~kodi/.kodi/temp/kodi.log I could see them getting seen, but not understood. Further searching led me to construct an Lircmap.xml - in particular the piece I needed was the <remote device="/dev/input/remote"> bit. The existing /usr/share/kodi/system/Lircmap.xml provided a good starting point for what I wanted and I dropped my generated file in ~kodi/.kodi/userdata/.

(Sadly it turns out I got lucky with the remote; it seems to be using the RC-5x variant which was broken in 3.17; works fine with the 3.16 kernel in Debian 8 (jessie) but nothing later. I’ve narrowed down the offending commit and raised #117221.)

Helpful pages included:

Categories: FLOSS Project Planets

Pau Garcia i Quiles: Is KDE the right place for Thunderbird?

Planet Debian - Tue, 2016-04-26 15:45

For years, Mozilla has been saying they are no longer focused on Thunderbird and its place is outside of Mozilla. Now it seems they are going to act on what they said: Mozilla seeks new home for e-mail client Thunderbird.

The candidates they are exploring are the Software Freedom Conservancy and The Document Foundation; I expect at least the Apache Software Foundation to be a serious candidate as well, and Gnome to propose itself too.

Some voices in KDE say we should also propose the KDE eV as a candidate hosting organization.

What follows is my opinion, not the official opinion of the eV or the board’s, or the KDE Community’s opinion. Take it with a grain (or more) of salt.

I am not so sure. I am trying to think what the KDE eV can offer to Mozilla to be appealing to them, and if my analysis is correct, we are too far from that, and Thunderbird would pose many risks to the other projects in KDE.

(I am blurring the lines between “KDE eV”, “KDE community”, “KDE Frameworks”, etc as it has no relevance for the discussion)

Thunderbird is an open source project/product with a lot of commercial users and has (still has?) many paid contributors.

IMHO what Mozilla is looking for is an organization with a well-oiled funding machine, able to campaign for money (even if in a tight circle, something like our Patron program), and to accept and process funds in a way that directly benefits Thunderbird, i.e. hiring developers to implement X or Y, or to work on some area full-time, or at least half-time.

KDE does not work like that.

KDE has few commercial users (other than distros, if you want to count them as commercial users). Other than Blue Systems, I don’t think we have any developer working for KDE.

Also, the eV is not exactly a well-oiled funding machine. We have been talking about that for years. And we do not hire developers directly to work on X or Y (at most, we pay for part of the expenses of sprints).

All of that makes me think we are not the right host for Thunderbird.

But it does not stop there!

Let’s say Thunderbird comes to KDE and suddenly we are offered USD 1 M from several organizations who want to be “Patrons of Thunderbird”, or influence Thunderbird, or whatever.

First problem: do we allow funds to go to a specific project rather than the eV deciding how to distribute them? AFAIK we do not allow that and at least one KDE sub-project has had trouble with that in the past.

Then there is the thing about “Patrons of Thunderbird”: no such thing. Either you are a Patron of KDE, including Plasma Mobile, OwnCloud, and whatnot, or you are nothing. You cannot be a “Patron of Partial KDE, namely Thunderbird”.

Influencing, did I say? The eV is by its own rules not an influencer on KDE’s direction, just an entity to provide legal and economic support. Quite the opposite from what Mozilla does today for Thunderbird.

Even if funders would not mind all that, there is the huge risk this poses for all the other projects. With as little as USD 200K donated towards Thunderbird (and USD 200K is not much for a product with so many commercial users, which means a healthy ecosystem of companies making money on support, development, etc, and thus donating to somehow influence or be perceived as important players), Thunderbird becomes the most important project in KDE. How would we manage this? In any sensible organization, Thunderbird would become the main focus and all the other KDE projects would be relegated. Even if we decide not to, external PR would make that look like it happened.

For all those reasons, I think KDE is not the right place for Thunderbird at the moment. It would require a big change in what the eV can do and how it operates. And that change may be for good but it’s not there now and it will not be by the time Mozilla has to decide if KDE is the right place.

All that, and I have not even talked about technology and what any sensible Thunderbird “customer” would think today: what is the medium and long-term roadmap? Migrate Thunderbird users to Kontact/KDE PIM? Port Thunderbird to Qt + KF5, maybe including moving to QtWebEngine? Will Windows support deteriorate as a result of that change? Or maybe the plan is to cancel KMail and Akregator? Those are second-order concerns, unimportant right now.

Update If you want to contribute to the discussion, please join the KDE-Community mailing list.

 

Categories: FLOSS Project Planets

Three Slots Awarded to Krita for Google Summer of Code

Planet KDE - Tue, 2016-04-26 15:34

Every year Google puts on a program called Google Summer of Code (GSoC). Students from all over the world try to obtain an internship where they can be paid to work on an open source application. This year we are lucky enough to have had three students accepted into the program! (Who gets accepted depends on how many applications there are, how many slots Google has and how many get distributed to KDE.) These three students will be working on Krita for the summer to improve three important areas in Krita.

Here is what they will be trying to tackle in the coming months.

  1. Jouni Pentikäinen – GSoC Project Overview – “This project aims to bring Krita’s animation features to more types of layers and masks, as well as provide means to generate certain types of interpolated frames and extend the user interface to accommodate these features.” In short, Jouni is going to work on animating opacity, filter layers and maybe even transform masks. Not just that, but he’ll work on a sexy curve time-line element for controlling the interpolation!
  2. Wolthera van Hövell tot Westerflier – GSoC Project Overview – “Currently, Krita’s architecture has all the bells and whistles for wide-gamut editing. Two big items are missing: Softproofing and a good internal colour selector dialogue for selecting colours that are outside of the sRGB colour space.” Wolthera’s work will make illustration for print workflows much smoother, letting you preview how well your RGB image will keep its details when printed out. Furthermore, she’ll work on improving your ability to use filters correctly on wide gamut files, extending Krita’s powerful color core.
  3. Julian Thijsen – GSoC Project Overview – “I aim to seek out the reliance on legacy functionality in the OpenGL engine that powers the QPainter class and to convert this functionality to work using OpenGL 3.2 Core Profile — it needs the compatibility profile at the moment. This will enable OSX to display decorations and will likely allow Krita to run on Mac OS X computers.” This one is best described as an “OpenGL canvas bypass operation”: Krita currently uses OpenGL 2.1 and 3.0. To run on OSX, we’ll need to be able to run everything in OpenGL 3.0 at the least. It is the biggest blocker for full OSX support, and we’re really excited Nimmy decided to take the challenge!

The descriptions might sound a bit technical for a lay person, but these enhancements will make a big impact. We congratulate the accepted students and wish them the best of luck this summer.

Categories: FLOSS Project Planets

Niels Thykier: Putting Debian packages in labelled boxes

Planet Debian - Tue, 2016-04-26 15:17

Lintian 2.5.44 was released the other day and (to most) the most significant bug fix was probably that Lintian learned about Policy 3.9.8.  I would like to thank Axel Beckert for doing that.  Notably it also made me update the test suite so as to make future policy releases less painful.

For others, it might be the fact that Lintian now accepts (valid) versioned provides (which seemed prudent now that Britney accepts them as well).  Newcomers might appreciate that we are giving a much more sensible warning when they have extra spaces in their changelog “sign off” line (rather than pretending it is an improper NMU).  But I digress…

 

What I am here to talk about is that Lintian 2.5.44 started classifying packages based on various “facts” or “properties” we can determine.  Therefore:

  • Every package will have at least one tag now!
  • These labels are known as something called “classification tags”.
  • The tags are not issues to be fixed!  (I will repeat this later to ensure you get this point!)

Here are some of the “labelled boxes” your packages will be put into[0]:

The tags themselves are (as mentioned) mere classifications and their primary purpose is to classify or measure certain properties.  With them, anybody can download the data set and come up with some bold statements about Debian packages (hopefully without relying too much on “lies, damned lies and statistics”).  Let’s try that immediately!

  • Almost 75% of all Debian packages do not need to run arbitrary code during installation[2]!
  • The “dh-sequencer” with cdbs is the future![3]

In the next release, we will also add tracking of auto-generated snippets from dh_*-tools.  Currently unversioned, but I hope to add versioning to that so we can find and rebuild packages that have been built with buggy autoscripts (like #788098)

If you want to see the classification tags for your package, please run lintian like this:

# Add classification tags
$ lintian -L +classification <pkg-or-changes>

# Or if you want only classification tags
$ lintian -L =classification <pkg-or-changes>

Please keep in mind that classification tags (“C”) are not issues in themselves. Lintian is simply attempting to add a visible indicator about a given “fact” or “property” in the package – nothing more, nothing less.

 

Future work – help (read: patches) welcome:

 

[0] Mind you, the reporting framework’s handling of these tags could certainly be improved.

[1] Please note how it distinguishes 1.0 into native and non-native based on whether the package has a diff.gz.  Presumably that can be exploited somehow …

[2] Disclaimer: At the time of writing, only ~80% of the archive have been processed.  This is computed as: NS / (NS + WS), where NS and WS are the number of unique packages with the tags “no-ctrl-scripts” and “ctrl-script” respectively.

[3] … or maybe not, but we got two packages classified as using both CDBS and the dh-sequencer.  I have not looked at it in detail. For the curious: libmecab-java and ctioga2.


Filed under: Debian, Lintian
Categories: FLOSS Project Planets

DrupalCon News: Come celebrate community in the exhibit hall

Planet Drupal - Tue, 2016-04-26 15:15

This DrupalCon we're cranking up the community exposure in the exhibit hall.

Categories: FLOSS Project Planets

OSTraining: How to Change the jQuery Version in Drupal 7

Planet Drupal - Tue, 2016-04-26 13:24

One of our OSTraining members asked about changing jQuery, so we created this tutorial for him.

Below is a quick guide to installing Drupal's jQuery plugin module.

Categories: FLOSS Project Planets

Acquia Developer Center Blog: Drupal 8 Module of the Week: Responsive and off-canvas menu

Planet Drupal - Tue, 2016-04-26 13:03

Each day, more Drupal 7 modules are being migrated over to Drupal 8 and new ones are being created for the Drupal community’s latest major release. In this series, the Acquia Developer Center is profiling some of the most prominent, useful modules, projects, and tools available for Drupal 8. This week, an interesting mobile-usability helper: Responsive and off-canvas menu.

Tags: acquia drupal planet, MotW, drupal 8, D8, UX, mobile, menu
Categories: FLOSS Project Planets

Python Engineering at Microsoft: Using CPython’s Embeddable Zip File

Planet Python - Tue, 2016-04-26 13:00

On the download page for CPython 3.5.1, you’ll see a wide range of options. Not all of these are well explained, especially for Windows users who have seven (seven!) choices.

Let me restructure the Windows items into a more feature-focused table:

Installer                  Initial download size  Installer requires internet?  Compatibility
x86 web-based installer    Very small             Yes                           Windows 32-bit and 64-bit
x64 web-based installer    Very small             Yes                           Windows 64-bit only
x86 executable installer   Large (30MB)           Only for debug options        Windows 32-bit and 64-bit
x64 executable installer   Large (30MB)           Only for debug options        Windows 64-bit only
x86 embeddable zip file    Moderate (7MB)         N/A (there is no installer)   Windows 32-bit and 64-bit
x64 embeddable zip file    Moderate (7MB)         N/A (there is no installer)   Windows 64-bit only

As is fairly common with installers these days, you have the choice to download everything in advance (the “executable installer”), or a downloader that will let you minimize the download size (the “web installer”). The latter allows you to select options before downloading the components, which can reduce the download size to around 8MB. (For those of us with fast, reliable internet access, this sounds irrelevant – for those of us tethering through a 3G mobile phone connection in the middle of nowhere, it’s a really huge saving!)

But what is the third option – the “embeddable zip file”? It looks like a reasonable download size and it doesn’t have any installer, so it seems quite attractive. However, the embeddable zip file is not actually a regular Python installation. It has a specific purpose and a narrow audience: developers who embed Python in their own native applications.

Why embed Python?

For many users, “Python” is the interactive shell that lets you type code and see immediate results. For others, it is an executable that can run .py files. While these are both true, in reality Python is itself a library that is used to interpret code. Let’s look at the complete source code for python.exe on Windows:

#include "Python.h"

int wmain(int argc, wchar_t **argv) {
    return Py_Main(argc, argv);
}

That’s it! The entire purpose of python.exe is to call a function from python35.dll. Which means it is really easy to create a different executable that will run exactly what you want:

#include "Python.h"

int wmain(int argc, wchar_t **argv) {
    wchar_t *myargs[3] = { argv[0], L"-m", L"myscript" };
    return Py_Main(3, myargs);
}

This version will ignore any command line arguments that are passed in, replacing them with an option to always start a particular script. If you give this executable its own name and icon, nobody ever has to know that you used Python at all!

But Python has a much more complete API than this. The official docs are the canonical source of information, but let’s look at a couple of example programs that you may find useful.

Executing a simple Python string

The short program above lets you substitute a different command line, but if you have a string constant you can also execute that. This example is based on the one provided in the docs.

#include "Python.h"

int wmain(int argc, wchar_t *argv[]) {
    Py_SetProgramName(argv[0]);
    Py_Initialize();
    PyRun_SimpleString("from time import time, ctime\n"
                       "print('Today is', ctime(time()))\n");
    Py_Finalize();
    return 0;
}

Executing Python directly

Running a string that is predefined or dynamically generated may be useful enough, but the real power of hosting Python comes when you directly interact with the objects. However, this is also when code becomes incredibly complicated.

In almost every situation where it is possible to use Cython or CFFI to generate code for wrapping native objects and values, you should probably use them. However, while they’re great for embedding native code in Python, they aren’t as helpful (at time of writing) for embedding Python into your native code. If you want to allow users to automate your application with a Python script, you’ll need some way of importing the user’s script, and to provide Python functions to call back into your native code.

As an example of hosting Python directly, the program below replicates the one from above but uses direct calls into the Python interpreter rather than a script. (Note that there is no error checking in this sample, and you need a lot of error checking here.)

#include "Python.h"

int wmain(int argc, wchar_t *argv[]) {
    PyObject *time_module, *time_func, *ctime_func;
    PyObject *time_value, *ctime_value, *text_value;
    wchar_t *text;
    Py_ssize_t cch;

    Py_SetProgramName(argv[0]);
    Py_Initialize();

    // NOTE: Practically every line needs an error check.
    time_module = PyImport_ImportModule("time");
    time_func = PyObject_GetAttrString(time_module, "time");
    ctime_func = PyObject_GetAttrString(time_module, "ctime");

    time_value = PyObject_CallFunctionObjArgs(time_func, NULL);
    ctime_value = PyObject_CallFunctionObjArgs(ctime_func, time_value, NULL);

    text_value = PyUnicode_FromFormat("Today is %S", ctime_value);
    text = PyUnicode_AsWideCharString(text_value, &cch);
    wprintf(L"%s\n", text);
    PyMem_Free(text);

    Py_DECREF(text_value);
    Py_DECREF(ctime_value);
    Py_DECREF(time_value);
    Py_DECREF(ctime_func);
    Py_DECREF(time_func);
    Py_DECREF(time_module);

    Py_Finalize();
    return 0;
}

In a larger application, you’d probably call Py_Initialize as part of your startup and Py_Finalize when exiting, and then have occasional calls into the Python engine wherever it made sense. This way, you can write parts of your application in Python and interact with them directly, or allow your users to extend it by providing their own Python scripts.

How does the embeddable zip file help?

Where does the embeddable zip file come into play? While you need a full Python install to compile these programs, when you install them onto a user’s computer, you only need the contents of the embeddable zip, as well as any (pre-built) packages you need. Header files, documentation, tests and shortcuts are not necessary.

Tools like pynsist will help produce installers for pure Python programs like this, using the embeddable zip file so that you don’t have to worry about whether your users already have Python or not.

Why wouldn’t you just run the regular Python installer as part of your application? Let’s play the “what if two programs did this?” game: program X runs the 3.5.0 installer and then program Y runs the 3.5.1 installer. What version does program X now have? If it ran the installer with a custom install directory, it probably has nothing left at all, but at best it now has 3.5.1.

The regular installer is designed for users, not applications. Programs that are not Python, but use Python, need to handle their own installation to make sure they end up with the correct version in the correct location with all the correct files. The embeddable zip file contains the minimum Python runtime for an application to install by itself.

What about other packages?

The embeddable zip file does not include pip. So how do you install packages? If you didn’t read the last sentence of the previous section, here it is again: the embeddable zip file contains the minimum Python runtime for an application to install by itself.

Using the embeddable zip file implies that you want the minimum required files to run your application, and you have your own installer. So if you need extra files at runtime – such as a Python package – you’ll need to install them with your installer. As mentioned above, for developing an application you should have a full Python installation, which does include pip and can install packages locally. But when distributing your application, you need to take responsibility.
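
As a sketch of what “taking responsibility” can look like at build time, the snippet below vendors your dependencies into a directory that ships next to the embeddable runtime (the package name and target path are placeholders, and you still need to make sure that directory ends up on your application’s module search path):

import subprocess
import sys

# Run this from your full development Python installation at build time.
subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "--target", "dist/packages",  # shipped alongside the embeddable runtime
    "requests",                   # placeholder for your real dependencies
])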

While this seems like more work (and it is more work!), the value is worth it. Do you want your installer to fail because it can’t connect to the internet? Do you want your application to fail because a different version of a library was installed? When you provide a bundle for your users, include everything that it needs (tools like pynsist will help do this automatically).

Where else can I get help?

Though I’m writing about the embeddable distribution on a Microsoft blog, this is actually a CPython feature. The doc page is part of the official Python documentation, and bugs or issues should be filed at the CPython bug tracker.

Categories: FLOSS Project Planets

Matthias Klumpp: Why are AppStream metainfo files XML data?

Planet Debian - Tue, 2016-04-26 12:20

This is a question raised quite often, the last time in a blog post by Thomas, so I thought it would be a good idea to give a slightly longer explanation (and also create an article to link to…).

There are basically three reasons for using XML as the default format for metainfo files:

1. XML is easily forward/backward compatible, while YAML is not

This is a matter of extending the AppStream metainfo files with new entries, or adapting existing entries to new needs.

Take this example XML line for defining an icon for an application:

<icon type="cached">foobar.png</icon>

and now the equivalent YAML:

Icons:
  cached: foobar.png

Now consider we want to add a width and height property to the icons, because we started to allow more than one icon size. Easy for the XML:

<icon type="cached" width="128" height="128">foobar.png</icon>

This line of XML can be read correctly by both old parsers, which will just see the icon as before without reading the size information, and new parsers, which can make use of the additional information if they want. The change is both forward and backward compatible.
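
A quick way to see this in action (a small sketch, not from the original post): an “old” consumer that knows nothing about the new attributes still reads the icon name, because unknown XML attributes are simply ignored.

import xml.etree.ElementTree as ET

old_xml = '<icon type="cached">foobar.png</icon>'
new_xml = '<icon type="cached" width="128" height="128">foobar.png</icon>'

for source in (old_xml, new_xml):
    icon = ET.fromstring(source)
    # An old consumer only looks at the type attribute and the text content.
    print(icon.get("type"), icon.text)  # prints "cached foobar.png" both times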

This looks different with the YAML file. The “foobar.png” is a plain string, and parsers will expect a string as the value of the cached key, while we would need a dictionary there to include the additional width/height information:

Icons:
  cached:
    name: foobar.png
    width: 128
    height: 128

The change shown above will break existing parsers though. Of course, we could add a cached2 key, but that would require people to write two entries, to keep compatibility with older parsers:

Icons:
  cached: foobar.png
  cached2:
    name: foobar.png
    width: 128
    height: 128

Less than ideal.

While there are ways to break compatibility in XML documents too, as well as ways to design YAML documents in a way which minimizes the risk of breaking compatibility later, keeping the format future-proof is far easier with XML compared to YAML (and sometimes simply not possible with YAML documents). This makes XML a good choice for this use case, since we cannot easily do transitions with thousands of independent upstream projects, and need to care about backwards compatibility.

2. Translating YAML is not much fun

A property of AppStream metainfo files is that they can be easily translated into multiple languages. For that, tools like intltool and itstool exist to aid with translating XML using Gettext files. This can be done at project build-time, keeping a clean, minimal XML file, or before, storing the translated strings directly in the XML document. Generally, YAML files can be translated too. Take the following example (shamelessly copied from Dolphin):

<summary>File Manager</summary>
<summary xml:lang="bs">Upravitelj datoteka</summary>
<summary xml:lang="cs">Správce souborů</summary>
<summary xml:lang="da">Filhåndtering</summary>

This would become something like this in YAML:

Summary:
  C: File Manager
  bs: Upravitelj datoteka
  cs: Správce souborů
  da: Filhåndtering

Looks manageable, right? Now, AppStream also covers long descriptions, where individual paragraphs can be translated by the translators. This looks like this in XML:

<description>
  <p>Dolphin is a lightweight file manager. It has been designed with ease of use and simplicity in mind, while still allowing flexibility and customisation. This means that you can do your file management exactly the way you want to do it.</p>
  <p xml:lang="de">Dolphin ist ein schlankes Programm zur Dateiverwaltung. Es wurde mit dem Ziel entwickelt, einfach in der Anwendung, dabei aber auch flexibel und anpassungsfähig zu sein. Sie können daher Ihre Dateiverwaltungsaufgaben genau nach Ihren Bedürfnissen ausführen.</p>
  <p>Features:</p>
  <p xml:lang="de">Funktionen:</p>
  <p xml:lang="es">Características:</p>
  <ul>
    <li>Navigation (or breadcrumb) bar for URLs, allowing you to quickly navigate through the hierarchy of files and folders.</li>
    <li xml:lang="de">Navigationsleiste für Adressen (auch editierbar), mit der Sie schnell durch die Hierarchie der Dateien und Ordner navigieren können.</li>
    <li xml:lang="es">barra de navegación (o de ruta completa) para URL que permite navegar rápidamente a través de la jerarquía de archivos y carpetas.</li>
    <li>Supports several different kinds of view styles and properties and allows you to configure the view exactly how you want it.</li>
    ....
  </ul>
</description>

Now, how would you represent this in YAML? Since we need to preserve the paragraph and enumeration markup somehow, and creating a large chain of YAML dictionaries is not really a sane option, the only choices would be:

  • Embed the HTML markup in the file, and risk non-careful translators breaking the markup by e.g. not closing tags.
  • Use Markdown, and risk people not writing the markup correctly when translating a really long string in Gettext.

In both cases, we would lose the ability to translate individual paragraphs, which also means that as soon as the developer changes the original text in YAML, translators would need to translate the whole bunch again, which is inconvenient.

On top of that, there are no tools to translate YAML properly that I am aware of, so we would need to write those too.

3. Allowing XML and YAML makes a confusing story and adds complexity

While adding YAML as a format would not be too hard, given that we already support it for DEP-11 distro metadata (Debian uses this), it would make the business of creating metainfo files more confusing. At the moment, we have a clear story: write the XML, store it in /usr/share/metainfo, use standard tools to translate the translatable entries. Adding YAML to the mix adds an additional choice that needs to be supported for eternity and also has the problems mentioned above.

I wanted to add YAML as a format for AppStream, and we discussed this at the hackfest as well, but in the end I think it isn’t worth the pain of supporting it for upstream projects (remember, someone needs to maintain the parsers and specification too, and keep XML and YAML in sync and updated). Don’t get me wrong, I love YAML, but for translated metadata which needs a guarantee on format stability it is not the ideal choice.

So yeah, XML isn’t fun to write by hand. But for this case, XML is a good choice.

Categories: FLOSS Project Planets

denemo @ Savannah: Version 2.0.8 is imminent, please test

GNU Planet! - Tue, 2016-04-26 12:08

Binaries (labelled 0.0.0) are at
http://denemo.org/~jjbenham/gub/uploads/?C=M;O=D
The new features are:

Copy and Paste
  • Applies to note/chord attributes
  • Ctrl-C, Ctrl-V work for these
  • Copied marking is highlighted
  • Selection changes color when copied
Improved Acoustic Feedback
  • Trill makes a short trill sound on entry
  • Copy attributes sounds
Improved Visual Feedback
  • Status bar notices are animated
  • Characters are highlighted in Lyric Verses
  • Directives are made more legible when cursor is on them
Cadenza Time
  • For un-metered music
  • Music can still display in “bars”
New Commands
  • Tuplet Positioning
  • Curved Tuplet Brackets
  • Cadenza on/off uses Cadenza Time, sets smaller note size, editable text
  • Notes without stems
  • Multi-line text annotation
  • Bold, Italic etc now apply to selection
  • A guard prevents accidental syntax collision
Updated Manual
  • More detail
  • Now indexed
Bug Fixes
  • Command Center search now reliable
  • Standalone Multi-line text with backslash editing
  • Pasting into measures that precede a time signature change

Categories: FLOSS Project Planets

Michal Čihař: Weekly phpMyAdmin contributions 2016-W16

Planet Debian - Tue, 2016-04-26 12:00

Last week was again focused on bug fixing, due to the increased number of bug reports received on the 4.6.0 release. Fortunately most of the annoying bugs are already fixed in git and will soon be released as 4.6.1.

Another bigger task started last week was the wiki migration. So far we've been using our own wiki running MediaWiki, and we're migrating it to the GitHub wiki. The wiki on GitHub is way simpler, but it seems a better choice for us. During the migration all user documentation will be merged into our documentation, so that it's all in one place and the wiki will be targeted at developers.

Handled issues:

Filed under: English, phpMyAdmin

Categories: FLOSS Project Planets

Reproducible builds folks: Reproducible builds: week 52 in Stretch cycle

Planet Debian - Tue, 2016-04-26 11:34

What happened in the Reproducible Builds effort between April 17th and April 23rd 2016:

Toolchain fixes

Thomas Weber uploaded lcms2/2.7-1 which will not write uninitialized memory when writing color names. Original patch by Lunar.

The GCC 7 development phase has just begun, so Dhole reworked his patch to make gcc use SOURCE_DATE_EPOCH if set which prompted interesting feedback, but it has not been merged yet.

Alexis Bienvenüe submitted a patch for sphinx to strip Python object memory addresses from the generated documentation.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: cobertura, commons-pool, easymock, eclipselink, excalibur-logkit, gap-radiroot, gluegen2, jabref, java3d, jcifs, jline, jmock2, josql, jtharness, libfann, libgroboutils-java, libjemmy2-java, libjgoodies-binding-java, libjgrapht0.8-java, libjtds-java, liboptions-java, libpal-java, libzeus-jscl-java, node-transformers, octave-msh, octave-secs2d, openmama, rkward.

The following packages have become reproducible after being fixed:

Patches submitted that have not made their way to the archive yet:

  • #821356 against emoslib by boyska: use echo in a portable manner across shells.
  • #822268 against transdecoder by Dhole: set PERL_HASH_SEED=0 when calling the scripts that generate samples.
tests.reproducible-builds.org
  • Steven Chamberlain investigated the performance of our armhf boards which also provided a nice overview of our armhf build network.
  • As i386 has almost been completely tested the order of the architecture displayed has been changed to reflect the fact that i386 is now the 2nd most popular architecture in Debian. (h01ger)
  • In order to decrease the number of blacklisted packages, the first build is now run with a timeout of 18h (previously: 12h) and the 2nd with 24h timeout (previously: 18h). (h01ger)
  • We now also vary the CPU model on amd64 (and soon on i386 too) so that one build is performed using an "AMD Opteron 62xx class CPU" while the other is done using an "Intel Core Processor (Haswell)". This is now possible as profitbricks.com offers VMs running both types of CPU and has generously increased their sponsorship once more. (h01ger)
  • Profitbricks increased our storage space by 400 GB which will be used to setup a 2nd build node for the coreboot/OpenWrt/NetBSD/Arch Linux/Fedora tests. This 2nd build node will run 398 days in the future for testing reproducibility on a different date.
diffoscope development

diffoscope 52 was released with changes from Mattia Rizzolo, h01ger, Satyam Zode and Reiner Herrmann, who also did the release. Notable changes included:

  • Drop transitional debbindiff package.
  • Let objdump demangle symbols for better readability.
  • Install bin/diffoscope instead of auto-generated script. (Closes: #821777)

As usual, diffoscope 52 is available on Debian, Archlinux and PyPI; other distributions will hopefully update soon.

Package reviews

28 reviews have been added, 11 have been updated and 94 have been removed this week.

14 FTBFS bugs were reported by Chris Lamb (one being a duplicate of a bug filed by Sebastian Ramacher an hour earlier).

Misc.

This week's edition was written by Lunar, Holger 'h01ger' Levsen and Chris Lamb and reviewed by a bunch of Reproducible builds folks on IRC.

Categories: FLOSS Project Planets

Martijn Faassen: Morepath 0.14 released!

Planet Python - Tue, 2016-04-26 10:34

Today we released Morepath 0.14 (CHANGES).

What is Morepath? Morepath is a Python web framework that is powerful and flexible due to its advanced configuration engine (Dectate) and an advanced dispatch system (Reg), but at the same time is easy to learn. It's also extensively documented!
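
For readers who have never seen it, a minimal Morepath application looks roughly like this (a sketch based on the project's quickstart documentation, not on this announcement; details may differ between releases):

import morepath

class App(morepath.App):
    pass

@App.path(path='')
class Root(object):
    pass

@App.view(model=Root)
def hello(self, request):
    # In Morepath views, "self" is the model instance being viewed.
    return "Hello, world!"

if __name__ == '__main__':
    morepath.run(App())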

The part of this release that I'm the most excited about is not technical but has to do with the community, which is growing -- this release contains significant work by several people other than myself. Thanks Stefano Taschini, Denis Krienbühl and Henri Hulski!

New for the community as well is that we have a web-based and mobile-supported chat channel for Morepath. You can join us with a click.

Please join and hang out!

Major new features of this release:

  • Documented extension API
  • New implementation overview.
  • A new document describing how to test your Morepath-based code.
  • Documented how to create a command-line query tool for Morepath configuration.
  • New cookiecutter template to quickly create a Morepath-based project.
  • New releases of various extensions compatible with 0.14. Did you know that Morepath has more.jwtauth, more.basicauth and more.itsdangerous extensions for authentication policy, more.static and more.webassets for static resources, more.chameleon and more.jinja2 for server templating languages, more.transaction to support SQLAlchemy and ZODB transactions and more.forwarded to support the Forwarded HTTP header?
  • Configuration of Morepath-based applications is now simpler and more explicit; we have a new commit method on application classes and applications get automatically committed during runtime if you don't do it first.
  • Morepath now performs host header validation to guard against header poisoning attacks.
  • New defer_class_links directive. This helps in a complicated app that is composed of multiple smaller applications that want to link to each other using the request.class_link method introduced in Morepath 0.13.
  • We've refactored both the publishing/view system and the link generation system. It's cleaner now under the hood.
  • Introduced an official deprecation policy as we prepare for Morepath 1.0, along with upgrade instructions.

Interested? Feedback? Let us know!

Categories: FLOSS Project Planets

Acquia Developer Center Blog: 3 Media Challenges in Drupal, and How to Use the Media Module to Vanquish Them

Planet Drupal - Tue, 2016-04-26 10:07

Drupal 7 out of the box offers a good implementation for uploading media, but it has three significant challenges.

Challenge 1: Files should be entities

In Drupal, files should be entities so you can add additional fields to the file type. As an example, when you upload an image you will want your standard image alt attribute, which specifies alternate text for the image if it cannot be displayed or if the user is using a screen reader. But you may want to have additional fields, such as photo credit or image caption.

Tags: acquia drupal planet
Categories: FLOSS Project Planets

Weekly Python Chat: Regex Workshop - Part 2

Planet Python - Tue, 2016-04-26 10:00

This is a continuation of the first regular expressions webinar event from March.

We'll learn about substitutions, data normalization, and greediness.
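
If you want a taste beforehand, here is a tiny, self-contained sketch (not from the event page) showing a substitution and the difference greediness makes:

import re

# Substitution: normalize runs of whitespace to a single space.
print(re.sub(r"\s+", " ", "too   many    spaces"))  # "too many spaces"

# Greediness: ".*" matches as much as possible, ".*?" as little as possible.
print(re.findall(r"<.*>", "<a><b>"))   # ['<a><b>']
print(re.findall(r"<.*?>", "<a><b>"))  # ['<a>', '<b>']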

If you did not participate in the first event, I recommend watching the replay before participating in this follow-up event.

Categories: FLOSS Project Planets

Drupalize.Me: Custom Drupal-to-Drupal Migrations with Migrate Tools

Planet Drupal - Tue, 2016-04-26 09:15

Drupal 8.1 now provides a user interface (UI) for conducting a Drupal-to-Drupal migration. Since there is no direct upgrade path for Drupal 6 or 7 to 8, you should become familiar with the migrate system in Drupal, as it will allow you to migrate your content from previous versions to Drupal 8.

Categories: FLOSS Project Planets