FLOSS Project Planets

Gunnar Wolf: Listadmin — *YES*

Planet Debian - Thu, 2014-10-23 14:05

Petter posted yesterday about Listadmin, the quick way to moderate mailman lists.

Petter: THANKS.

I am a fan of automation. But, yes, I had never thought of doing this. Why? Don't know. But this is way easier than using the Web interface for Mailman:

$ listadmin
fetching data for conoc_des@my.example.org ... nothing in queue
fetching data for des_polit_pub@my.example.org ... nothing in queue
fetching data for econ_apl@my.example.org ... nothing in queue
fetching data for educ_ciencia_tec@my.example.org ... nothing in queue
fetching data for est_hacend_sec_pub@my.example.org ...
[1/1] ============== est_hacend_sec_pub@my.example.org ======
From:     sender@example.org
Subject:  Invitación al Taller Insumo Producto
Reason:   El cuerpo del mensaje es demasiado grande: 777499
Spam? 0
Approve/Reject/Discard/Skip/view Body/Full/jump #/Undo/Help/Quit ? a
Submit changes? [yes]
fetching data for fiscal_fin@my.example.org ... nothing in queue
fetching data for historia@my.example.org ... nothing in queue
fetching data for industrial@my.example.org ... nothing in queue
fetching data for medio_amb@my.example.org ... nothing in queue
fetching data for mundial@my.example.org ... nothing in queue
fetching data for pol_des@my.example.org ... nothing in queue
fetching data for sec_ener@my.example.org ... nothing in queue
fetching data for sec_prim@my.example.org ... nothing in queue
fetching data for trab_tec@my.example.org ... nothing in queue
fetching data for urb_reg@my.example.org ... nothing in queue
fetching data for global@my.example.org ... nothing in queue

I don't know how, in many years of managing several mailing lists, I never thought of this! I'm echoing it here, as I know several of my readers run Mailman as well and might not be following Planet Debian.

Categories: FLOSS Project Planets

Drupal Watchdog: Drupal Static Caching

Planet Drupal - Thu, 2014-10-23 13:10

Drupal at scale is possible, and indeed, even powerful. Ask someone what they think of Drupal, though, and more often than not they'll tell you that they've heard it's slow. I've seen a lot of poorly-performing Drupal sites in my line of work, and caching is by far the most common reason for the gap between possibility and practice. Even the most basic Drupal installation brings an excellent multi-tier caching architecture to the table, but unfortunately it's easy for developers to break it.

Perhaps the most frustrating caching problem is when developers miss easy opportunities to leverage static caching in their custom modules. By storing computed function results in static PHP variables, further calls to the same method can be made hundreds or thousands of times faster. Taking advantage of this technique requires minimal developer effort: if a result has already been computed, return it; otherwise, store the new result in the cache before returning it.

function apachesolr_static_response_cache($searcher, $response = NULL) {
  $_response = &drupal_static(__FUNCTION__, array());
  if (is_object($response)) {
    $_response[$searcher] = clone $response;
  }
  if (!isset($_response[$searcher])) {
    $_response[$searcher] = NULL;
  }
  return $_response[$searcher];
}

The Apache Solr module uses static caching in several places, such as ensuring that only one Solr search will be performed per request, even when there are several search-related blocks on the page.

Like any caching solution, the performance benefits of static caching depend on whether the speed benefit of cache hits outweighs the performance overhead associated with cache misses. The largest performance gains come from caching functions that are time-consuming, repeated often within a single PHP execution, and expected to return the same value more often than not. This is a well-defined set of conditions, and a lot of Drupal code meets them.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: Introducing Rocker: Docker for R

Planet Debian - Thu, 2014-10-23 12:39

You only know two things about Docker. First, it uses Linux
containers. Second, the Internet won't shut up about it.

-- attributed to Solomon Hykes, Docker CEO

So what is Docker?

Docker is a relatively new open source application and service which is seeing interest across a number of areas. It uses recent Linux kernel features (containers, namespaces) to shield processes. While its use superficially resembles that of virtual machines, it is much more lightweight, as it operates at the level of a single process (rather than emulating an entire OS layer). This also allows containers to start almost instantly and require very few resources, and hence permits an order of magnitude more deployments per host than virtual machines.

Docker offers a standard interface for creation, distribution and deployment. The shipping container analogy is apt: just as shipping containers (via their standard size and "interface") allow global trade to prosper, Docker is aiming for nothing less for deployment. A Dockerfile provides a concise, extensible, and executable description of the computational environment. The Docker software then builds a Docker image from the Dockerfile. Docker images are analogous to virtual machine images, but smaller and built in discrete, extensible and reusable layers. Images can be distributed and run on any machine that has the Docker software installed, including Windows, OS X and of course Linux. Running instances are called Docker containers. A single machine can run hundreds of such containers, including multiple containers running the same image.

There are many good tutorials and introductory materials on Docker on the web. The official online tutorial is a good place to start; this post cannot go into more detail and still remain short and introductory.

So what is Rocker?

At its core, Rocker is a project for running R using Docker containers. We provide a collection of Dockerfiles and pre-built Docker images that can be used and extended for many purposes.
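Extending one of these images takes only a couple of Dockerfile lines. A purely illustrative sketch (the package chosen here is just an example, not part of Rocker):

```dockerfile
# Start from the Rocker base R image
FROM rocker/r-base

# Add an example CRAN package on top of it
RUN Rscript -e 'install.packages("ggplot2")'
```

Building this with docker build produces a new image layered on top of r-base, which is exactly the layered-reuse model described above.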

Rocker is the name of our GitHub repository contained within the Rocker-Org GitHub organization.

Rocker is also the name of the account under which the automated builds at Docker provide containers ready for download.

Current Rocker Status Core Rocker Containers

The Rocker project develops the following containers in the core Rocker repository:

  • r-base provides a base R container to build from
  • r-devel provides the basic R container, as well as a complete R-devel build based on current SVN sources of R
  • rstudio provides the base R container as well as an RStudio Server instance

We have settled on these three core images after earlier work in repositories such as docker-debian-r and docker-ubuntu-r.

Rocker Use Case Containers

Within the Rocker-org organization on GitHub, we are also working on

  • Hadleyverse which extends the rstudio container with a number of Hadley packages
  • rOpenSci which extends hadleyverse with a number of rOpenSci packages
  • r-devel-san provides an R-devel build instrumented for "sanitizer" run-time diagnostics, built with a recent compiler
  • rocker-versioned aims to provide containers with 'versioned' previous R releases and matching packages

Other repositories will probably be added as new needs and opportunities are identified.


The Rocker effort supersedes and replaces earlier work by Dirk (in the docker-debian-r and docker-ubuntu-r GitHub repositories) and Carl. Please use the Rocker GitHub repo and Rocker Containers from Docker.com going forward.

Next Steps

We intend to follow up with more posts detailing usage of both the source Dockerfiles and the binary containers on different platforms.

Rocker containers are fully functional. We invite you to take them for a spin. Bug reports, comments, and suggestions are welcome; we suggest you use the GitHub issue tracker.


We are very appreciative of all comments received from early adopters and testers. We would also like to thank RStudio for allowing us to redistribute their RStudio Server binary.

Published concurrently at rOpenSci blog and Dirk's blog.


Dirk Eddelbuettel and Carl Boettiger

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Aten Design Group: Organizing Features for Complex Drupal Sites

Planet Drupal - Thu, 2014-10-23 12:34

We build Drupal sites with a combination of site code and the settings that Drupal stores in the database. Settings are easy for someone with no coding experience to change; but we can't track setting changes in the database as easily as we can track changes in code.

Drupal’s Features module is the most widely adopted solution in Drupal 7 for storing settings as version-controlled configuration in code. Like with most things Drupal, there isn’t just one approach to configuration in code: a few Aten folks have been working on another approach called CINC.

If you do decide to use the Features module, you’ll quickly learn there isn’t a single way of creating features. Drupal Kit provides some guidelines, but structuring and organizing Features-created modules is largely left up to the developer. Things can quickly get unwieldy on a complex site with multiple developers and many Features. In cases where Features is a project requirement, we’ve created a process that has worked well for us.

Be consistent with Features naming conventions

Our Feature names follow this convention: [projectshortname]_[summary]_[package_name]_feature

  • [projectshortname] This three-character code is decided at the beginning of a project and keeps the custom module and feature names unique to the project.
  • [summary] This is a super-short summary of the specifics of the feature.
  • [package_name] This should closely follow the package naming convention set for the project. Keep reading to learn more about package names.
  • feature This lets others know that this module was created by Features and also helps keep the module name unique.
Examples in practice
  • Page content type - abc_page_entity_feature
  • Image style definitions - abc_image_styles_config_feature
  • Blog View - abc_blog_views_feature
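The convention can be captured in a tiny helper (purely illustrative, not part of any Drupal project or the Features module):

```python
def feature_name(shortname, summary, package):
    """Compose a Features module name following the convention
    [projectshortname]_[summary]_[package_name]_feature."""
    return "_".join([shortname, summary, package, "feature"])
```

For instance, feature_name("abc", "page", "entity") yields "abc_page_entity_feature", matching the first example above.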
Categorize Features by providing a package name

When creating a new Feature, you can specify a package name. This is the same as defining “package = [something]” in a custom module’s .info file. The package name groups your feature on the Features list page and the overall modules page. Being consistent with package names makes it easier for other developers and clients to find available features. We suggest nailing down package names at the beginning of a project. Our package names typically look something like this:

  • [projectshortname] Configuration (image styles, text formats, search settings, various module settings)
  • [projectshortname] Entity (content types, fields, field collections, taxonomies, etc.)
  • [projectshortname] Views (views defined by views module)
  • [projectshortname] Page (page manager & panels)
Create a directory structure for modules created by Features

Our typical modules directory (sites/all/modules) is structured like this:

  • contrib (modules downloaded from Drupal.org)
  • custom (modules that aren’t contrib and specific to the project)
  • features (modules created by Features)
  • patched (patched contrib modules)

The Features directory (sites/all/modules/features) is then broken down a bit further to make it easier to find what you need. We try to make this mirror package names as much as possible.

  • features
    • configuration
    • entity
      • content_type
      • field_collection
      • shared
      • taxonomy
    • page
    • views
Limit cross-Feature dependencies

It is normal for a Feature to be dependent on other Drupal modules. For example, a content type Feature will be dependent on the Field Group module if using field groups. When creating content type Features, fields used by the content type are tightly coupled with each feature. The quickest way to a cross-Feature dependency is by creating two content type Features that have several shared fields (e.g. body, tags). Content Type One may contain the field base for the body field. Content Type Two also uses the body and now has a dependency on Content Type One.

Cross-Feature dependencies make it hard to have Features that are truly independent and reusable across projects. Our way around this is being very intentional about when we use shared fields and adding them in a completely different Feature. We call this Feature “Shared Field Base”. This shared Feature allows Content Type One and Content Type Two to be completely independent of one another.

At the end of the day, the important thing is to pick an approach and stick with it throughout the project. We’ve created a process that works well for us, but there are other approaches. How does your approach differ from ours? What other tips do you have for creating features and keeping them organized? Are you excited about Drupal 8’s plans for configuration in code?

Categories: FLOSS Project Planets

groups.drupal.org frontpage posts: Unsolicited email incident on Groups.drupal.org

Planet Drupal - Thu, 2014-10-23 11:57

Hi all,

2 days ago there was an unsolicited email incident on Groups.drupal.org. A number of people were added to a group without their permission and subsequently received email notifications for posts and comments in that group. This was done via 'Add members' functionality, which was available to all group organizers on Groups.drupal.org. The problem was reported via the Groups issue queue and other channels and site maintainers took immediate steps to delete the group in question and disable comments on posts to stop email notifications going out to all affected users.

Our next step was to disable 'Add members' functionality to prevent such situations in the future. Group organizers still have 'Invite friend' functionality available to invite people to their groups, which will require users to accept invitation, giving their explicit permission to be added to the group.

We apologize for the inconvenience this caused.

Groups.drupal.org team

Categories: FLOSS Project Planets

Emmanuel Lecharny: Free ADSL, or how to go without ADSL for more than a month...

Planet Apache - Thu, 2014-10-23 11:29
Going without electricity for 2 days, except in the case of a storm, is simply inconceivable in France. With an ADSL connection, you can stay cut off for more than a month without your provider seeming to care, in this case Free.

We're not talking about a village at the bottom of a remote valley in the upper Morvan here. This is Puteaux, in the near suburbs of Paris.

It all started with work on the line by the incumbent operator on September 24, 2014. The result: an ADSL line whose speed dropped abruptly to 500 Kbit/s (54 dB) instead of 10 Mbit/s (42 dB). Typical of a damaged line, or of a signal weakened by magnetic interference or a poor-quality cable.

And then the infernal loop begins: open an incident ticket with Free, wait a week before your equipment is checked (because yes, by default the problem is always at your end!), then, once the equipment is confirmed to be working, the case is handed over to the incumbent operator (meaning FT) through what is called a GAMOT. That's another week of waiting...

After which, as a rule, FT of course detects no problem and closes the GAMOT. You have just lost 2 weeks...

So now you have to reopen a ticket with Free, which this time will have to go further than confirming that your hardware works, and open a second GAMOT. Since neither operator acknowledges a problem on its own side, there must be a problem between the two...

(Meanwhile, your neighbor, who is an FT subscriber, was back online within 2 days...)

What happens during this "expert assessment procedure"? The two companies send out two technicians to test the connection end to end. Usually they discover a bad connection on this occasion, and service is restored, if all goes well.

In practice, count on a good month, sometimes less, sometimes more.

And that is not normal.

In the best case, we can call it incompetence, though it remains to be seen whose. In practice, the procedure in place at the two companies (and we can assume it is the same with other third-party operators) results in a clear distortion of the rules of the game: if you are with the incumbent operator end to end, you are back online in 2 days. Otherwise, each side passes the buck for a good month...

Free only opens a GAMOT after checking the equipment, most likely because of the costs the incumbent charges for processing a GAMOT in cases where the problem turns out to be the hardware or a connection problem at the customer's home. Fair enough, except that it takes a good week to get an appointment with a technician...
The incumbent operator, for its part, does the bare minimum, namely testing the sync, since that is usually enough to detect a connection problem, without, of course, fixing your bandwidth problem!

The question at this point is whether this intervention protocol doesn't reflect an active or passive intent to favor the incumbent operator's customers. And that's without even mentioning the alternative operator's inability to provide after-sales service worthy of the name, for lack of sufficient staff...

At this point, after a month without service, it's worth considering legal action, possibly as a class action (since that is now possible in France) to get out of these infernal games of ping-pong! But short of heavy penalties against the operators to force them to reduce these unacceptable delays, I don't see how things could change.

But don't dream: even in that scenario, the procedure would take years... You would first have to go through an accredited consumer-protection association (there are 17 of them) and hope it agrees to start proceedings. That says nothing about how long it would take to obtain a judgment (which may well be rendered in the operators' favor), and the operators can of course appeal, then go to the Court of Cassation...

That said, there comes a point where inaction amounts to acceptance...
Categories: FLOSS Project Planets

KDE Connect feature brainstorming

Planet KDE - Thu, 2014-10-23 11:00

In a recent informal meeting of KDE users in Seattle, Andrew Lake from the KDE Visual Design Group gave me some ideas he had for KDE Connect. Since I think that we all have a different vision and different ideas that are possible to implement on top of KDE Connect, I decided to write this post asking for your ideas, in some kind of community brainstorming.

Also, since the last time I made a post about possible features for KDE Connect, a lot of them have been implemented or are work in progress, so I hope this post achieves the same effect :)

Here is my personal list of possible features:

  • Plugin for power management (sleep, shut down, etc).
  • “Find my phone” plugin, that makes your phone ring even if it is silenced.
  • Add media controls from the Android lock screen.
  • Plugin to keep your computer unlocked while phone is reachable.
  • Use the phone as a location provider for the desktop.
  • Akonadi resource sync with Android (contacts, calendar…).
  • Plugin to print from your phone to your computer’s printer.
  • Add support for drag’n drop for touchpad plugin.
  • Port to other desktops and platforms: Gnome, Unity, MacOS, Windows…
  • Publish and maintain the iOS port that Yang Qiao began this GSoC (any iPhone users around?)

And here is some stuff is already being worked on:

  • Answer SMS from the desktop (by David Edmunson).
  • Pair with a specific IP address or hostname (by Achilleas Koutsou).

Now it’s your time to come up with more ideas in the comments! And of course feel free to give your opinion/enhance the ideas on my list.

Update: As Aleix Pol suggested, I created a todo.kde.org for KDE Connect that I will be updating with the ideas that come up in the comments.

Categories: FLOSS Project Planets

Colm O hEigeartaigh: Apache CXF Authentication and Authorization test-cases IV

Planet Apache - Thu, 2014-10-23 10:36
This is the fourth in a series of posts on authentication and authorization test-cases for web services using Apache CXF. The first focused on different ways to authenticate and authorize UsernameTokens for JAX-WS services. The second looked at more advanced examples such as using Kerberos, WS-Trust, XACML, etc. The third looked at different ways of achieving SSO in CXF for both JAX-WS and JAX-RS services. This post gives some examples of authenticating and authorizing JAX-RS services in Apache CXF. I also included the SSO examples relevant to JAX-RS from the previous post. The projects are:
  • cxf-jaxrs-xmlsecurity: This project shows how to use XML Signature (and Encryption) to secure JAX-RS services. In particular, see the AuthenticationTest.
  • cxf-jaxrs-sts: This project demonstrates how to use HTTP/BA for authentication for JAX-RS, where the credentials are dispatched as a UsernameToken to an STS instance for authentication. In addition, the project shows how to authorize the request by asking the STS for a SAML Token containing the roles of the user.
  • cxf-kerberos: This project (covered previously for JAX-WS) has been updated with a test that shows how to use Kerberos with JAX-RS.
  • cxf-saml-sso: This project shows how to leverage SAML SSO with Apache CXF to achieve SSO for a JAX-RS service. CXF supports the POST + redirect bindings of SAML SSO for JAX-RS endpoints. As part of this demo, a mock CXF-based IdP is provided which authenticates a client using HTTP/BA and issues a SAML token using the CXF STS. Authorization is also demonstrated using roles embedded in the issued SAML token. 
  • cxf-fediz-federation-sso: This project shows how to use the new CXF plugin of Apache Fediz 1.2.0 to authenticate and authorize clients of a JAX-RS service using WS-Federation. This feature will be documented more extensively at a future date, and is considered experimental for now. Please play around with it and provide feedback to the CXF users list.
Categories: FLOSS Project Planets

New Development Builds

Planet KDE - Thu, 2014-10-23 09:31

It’s been about a month since we last published new development builds… And a lot has happened in the meantime. Now, before everyone starts downloading, a warning: these builds are experimental and not ready for daily work.


And we mean it. Not only does this build include a month of Dmitry’s work, but we also merged in Mohit Goyal’s Summer of Code work. That touched a lot of stuff… There is at least one known crash: the sketch brushes are broken. We’re working on a fix, but we need more testing! Please join the forum and report your findings!

Okay, so what’s in here?

Let’s take Mohit’s work first:

  1. Dirty Presets: Keeps temporary tweaks made to the preset till the session ends.
    Go to the Brush Editor box. Bottom left — select “Temporarily save tweaks to presets”. Any time you make a change to any setting in the preset — the textbox will turn pink and a “+” symbol will appear on the icon. The Reload button is used to reset the tweaks for that particular preset.
  2. Locked Settings: Keeps settings constant across presets
    In the brush editor box, for any paint option like “Size” on the left, there will be a “link” icon. Right click on that option to Lock the option. Now that particular setting will remain constant across all presets. If you change it in one preset – the changes will reflect across all presets. To unlock any option: right click on a locked option and click on Drop Locked Settings. You can either use these settings in the preset or load the last settings available in the preset.
  3. Cumulative Undo/Redo
    1. To use this feature, you will first have to go to Settings->Dockers->Undo History to activate the docker.
    2. Next right click on <empty> or on any stroke in the undo docker and select “Use Cumulative Undo/Redo”
    3. This feature merges commands together so that the user doesn’t have to undo a particular group one by one, and it provides a much larger undo history than the initial 30 strokes. The feature works on three configurable parameters:
    Time before merging strokes together: while strokes are made, the code keeps checking for a particular time lapse of T seconds before it merges the groups together
    Time to group the strokes : According to this parameter — groups are made. Every stroke is put into the same group till two consecutive strokes have a time gap of more than T seconds. Then a new group is started.
    Individual strokes to leave at the end : A user may want to keep the ability of Undoing/Redoing his last N strokes. Once N is crossed — the earlier strokes are merged into the group’s first stroke.
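The grouping logic described above can be sketched in a few lines of Python. This is a simplified model using plain timestamps; the function and parameter names are invented for illustration and are not Krita's:

```python
def group_strokes(times, gap_seconds, keep_last):
    """Model of cumulative undo grouping: consecutive strokes whose
    time gap is at most gap_seconds merge into one group, while the
    most recent keep_last strokes stay individually undoable."""
    mergeable = times[:-keep_last] if keep_last else list(times)
    tail = times[-keep_last:] if keep_last else []
    groups = []
    for t in mergeable:
        if groups and t - groups[-1][-1] <= gap_seconds:
            groups[-1].append(t)   # gap small enough: same group
        else:
            groups.append([t])     # gap too large: start a new group
    return groups, tail
```

With a 2-second gap and keep_last of 1, strokes at t = 0, 1, 2 merge into one undo step, a stroke at t = 10 starts a new group, and the final stroke remains individually undoable.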

Next, Scott Petrovic has been putting a lot of polish on Krita!

  1. Keep Eraser size. Now, toggling Erase mode with the ‘E’ shortcut will save the size of your current preset. That means that if you paint with a small brush, then erase with a big brush, you only have to press ‘E’.
  2. Saving tool options. Most of the tools now save your settings between sessions: fill, multi-line, gradient, rectangle, ellipse, line, move, text, crop, freehand…
  3. And a lot of polish in other places as well — better defaults, labels for previously unclear sliders, ‘C’ is crop tool now,  and more.

Dmitry Kazakov has worked on the transform tool. While the cage tool got plenty of fixes, he also implemented a whole new mode: liquify. This needs testing now! Now the transform tool option pane is seriously overloaded…

If you’ve got a good proposal for a better layout, please share your ideas on the forum! Right now, Dmitry is working on the next part of the Kickstarter feature list: non-destructive transformation masks.


Boudewijn Rempt did some more OSX porting work — the file dialogs should now be native and work correctly — and started working on loading and saving resource blocks, layer styles and transparency masks in PSD.

And we’ve got a nice icon clean-up, too, courtesy of Wolthera and Timothée.

Sven Langkamp has worked mostly on the MVC branch, which is where we’re trying to make Krita open more than one image in a window. He got it stable enough that we could start testing for real, and Wolthera dove in and made reports… There’s a lot of work to be done here!

So, here are the new builds:

For Windows, I’ve added a zip file next to the MSI installer. Unzip the zip file anywhere, go into krita/bin, double-click krita.exe and you can test this build without breaking your real installation. Remember — this build is not ready for daily work, it’s experimental.

For OSX, here’s a new DMG. Other than the file dialog fix, there’s no new OSX specific fixes in here. We’re still mulling over ways to fund a proper OSX port, among other things.

For Linux, Krita Studio users have access to a package for CentOS 6.5, and Krita Lime has been updated for Ubuntu users.


Categories: FLOSS Project Planets

Mike Driscoll: PyWin32: How to Get an Application’s Version Number

Planet Python - Thu, 2014-10-23 08:30

Occasionally you will need to know what version of software you are using. The usual way to find this out is to open the program, go to its Help menu, and click the About menu item. But this is a Python blog and we want to do it programmatically! To do that on a Windows machine, we need PyWin32. In this article, we’ll look at two different methods of getting the version number of an application.

Getting the Version with win32api

First off, we’ll get the version number using PyWin32’s win32api module. It’s actually quite easy to use. Let’s take a look:

from win32api import GetFileVersionInfo, LOWORD, HIWORD

def get_version_number(filename):
    try:
        info = GetFileVersionInfo(filename, "\\")
        ms = info['FileVersionMS']
        ls = info['FileVersionLS']
        return HIWORD(ms), LOWORD(ms), HIWORD(ls), LOWORD(ls)
    except:
        return "Unknown version"

if __name__ == "__main__":
    version = ".".join([str(i) for i in get_version_number(
        r'C:\Program Files\Internet Explorer\iexplore.exe')])
    print version

Here we call GetFileVersionInfo with a path and then attempt to parse the result. If we cannot parse it, then that means that the method didn’t return us anything useful and that will cause an exception to be raised. We catch the exception and just return a string that tells us we couldn’t find a version number. For this example, we check to see what version of Internet Explorer is installed.

Getting the Version with win32com

To make things more interesting, in the following example we check Google Chrome’s version number using PyWin32’s win32com module. Let’s take a look:

# based on http://stackoverflow.com/questions/580924/python-windows-file-version-attribute
from win32com.client import Dispatch

def get_version_via_com(filename):
    parser = Dispatch("Scripting.FileSystemObject")
    version = parser.GetFileVersion(filename)
    return version

if __name__ == "__main__":
    path = r"C:\Program Files\Google\Chrome\Application\chrome.exe"
    print get_version_via_com(path)

All we do here is import win32com’s Dispatch class and create an instance of that class. Next we call its GetFileVersion method and pass it the path to our executable. Finally we return the result which will be either the number or a message saying that no version information was available. I like this second method a bit more in that it automatically returns a message when no version information was found.

Wrapping Up

Now you know how to check an application version number on Windows. This can be helpful if you need to check if key software needs to be upgraded or perhaps you need to make sure it hasn’t been upgraded because some other application requires the older version.
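One caveat for such an upgrade check: dotted version strings must be compared numerically, not lexically, or "10" sorts before "9". A small helper (illustrative, not part of PyWin32) makes the comparison safe:

```python
def parse_version(version_string):
    """Convert a dotted version string such as '11.0.9600.17280' into a
    tuple of integers, so comparisons are numeric rather than lexical."""
    return tuple(int(part) for part in version_string.split("."))
```

For example, parse_version("2.10") correctly compares greater than parse_version("2.9"), which a plain string comparison would get wrong.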

Related Reading

Categories: FLOSS Project Planets

Alessio Treglia: Bits from the Debian Multimedia Maintainers

Planet Debian - Thu, 2014-10-23 07:30

This brief announcement was released yesterday to the debian-devel-announce mailing list.



The Debian Multimedia Maintainers have been quite active since the Wheezy release, and have some interesting news to share for the Jessie release. Here we give you a brief update on what work has been done and work that is still ongoing.

Let’s see what’s cooking for Jessie then.


Frameworks and libraries Support for many new media formats and codecs.

The codec library libavcodec, which is used by popular media playback applications including vlc, mpv, totem (using gstreamer1.0-libav), xine, and many more, has been updated to the latest upstream release version 11 provided by Libav. This provides Debian users with HEVC playback, a native Opus decoder, Matroska 3D support, Apple ProRes, and much more. Please see libav’s changelog for a full list of functionality additions and updates.


libebur128 is a free implementation of the European Broadcasting Union Loudness Recommendation (EBU R128), which is essentially an alternative to ReplayGain. The library can be used to analyze audio perceived loudness and subsequently normalize the volume during playback.
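To make the normalization step concrete: EBU R128 specifies a target programme loudness of -23 LUFS, so once the library has measured the perceived loudness, the playback gain is just the difference between target and measurement. A sketch (the function name is invented; libebur128 itself only does the measurement):

```python
def normalization_gain_db(measured_lufs, target_lufs=-23.0):
    """Gain in dB that brings material measured at measured_lufs to the
    EBU R128 target loudness; negative values mean attenuation."""
    return target_lufs - measured_lufs
```

Material measured at -18 LUFS, for instance, would be attenuated by 5 dB during playback.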


libltc provides functionalities to encode and decode Linear (or Longitudinal) Timecode (LTC) from/to SMPTE data timecode.


libva and the driver for Intel GPUs has been updated to the 1.4.0 release. Support for new GPUs has been added. libva now also supports Wayland.

Pure Data

A number of additional new libraries (externals) will appear in Jessie, including (among others) Eric Lyon's fftease and lyonpotpourrie, Thomas Musil's iemlib, the pdstring library for string manipulation, and pd-lua, which allows writing Pd objects in the popular Lua scripting language.



LASH Audio Session Handler was abandoned upstream a long time ago in favor of a new session management system called ladish (LADI Session Handler). ladish allows users to run many JACK applications at once and save/restore their configuration with a few mouse clicks.

The current status of the integration between the session handler and JACK may be summarized as follows:

  • ladish provides the backend;
  • laditools contains a number of useful graphical tools to tune the session management system’s whole configuration (including JACK);
  • gladish provides an easy-to-use graphical interface for the session handler.

Note that ladish uses the D-Bus interface to the JACK daemon; therefore only Jessie's jackd2 supports it and cooperates well with it.


Plugins: LV2 and LADSPA

Debian Jessie will bring the newest 1.10.0 version of the LV2 technology. Most changes affect the packaging of new plugins and extensions; a brief list of packaging guidelines is now available.
A number of new plugins and development tools have also been made available during the Jessie development cycle:

LV2 Toolkit

LVTK provides libraries that wrap the LV2 C API and extensions into easy-to-use C++ classes. The original work for this was mostly done by Lars Luthman in lv2-c++-tools.

Vee One Suite

The whole suite by Rui Nuno Capela is now available in Jessie, and consists of three components:

  • drumkv1: old-school drum-kit sampler synthesizer
  • samplv1: polyphonic sampler
  • synthv1: analog-style 4-oscillator subtractive synthesizer

All three are provided both as LV2 plugins and as stand-alone JACK clients. JACK session, JACK MIDI, and ALSA MIDI are supported too.

x42-plugins and zam-plugins

LV2 bundles containing many audio plugins for high quality processing.


Fomp is an LV2 port of the MCP, VCO, FIL, and WAH plugins by Fons Adriaensen.

Some other components have been upgraded to more recent upstream versions:

  • ab2gate: 1.1.7
  • calf: 0.0.19+git20140915+5de5da28
  • eq10q: 2.0~beta5.1
  • NASPRO: 0.5.1

We’ve packaged ste-plugins, Fons Adriaensen’s new stereo LADSPA plugins bundle.

A major upgrade of frei0r, the standard collection of plugins for the minimalistic video-effects plugin API, will be available in Jessie.


New multimedia applications

Advene

Advene (Annotate Digital Video, Exchange on the NEt) is a flexible video
annotation application.


The new generation of the popular digital audio workstation will make its very first appearance in Debian Jessie.


Qt4 front-end for the MPD daemon.


Csound for Jessie will feature the new major series 6, with the improved IDE CsoundQT. This new Csound supports improved array data type handling, multi-core rendering, and debugging features.


DIN Is Noise is a musical instrument and audio synthesizer that supports JACK audio output, with MIDI, OSC, and IRC bot as input sources. It can also be extended and customized with Tcl scripts.


dvd-slideshow consists of a suite of command-line tools for making slideshows from collections of pictures. Documentation is available in /usr/share/doc/dvd-slideshow/.


DVDwizard can fully automate the creation of a DVD-Video filesystem. It supports graphical menus, chapters, multiple titlesets, and multi-language streams, in both PAL and NTSC video modes.


Flowblade is a video editor; like the popular KDenlive it is based on the MLT engine, but it is more lightweight and differs in some editing concepts.


Forked-daapd switched to a new, active upstream again, dropping Grand Central Dispatch in favor of libevent. The switch fixed several bugs and made forked-daapd available on all release architectures instead of shipping only on amd64 and i386. Now nothing prevents you from setting up a music streaming (DAAP/DACP) server on your favorite home server, no matter whether it is based on mips, arm, or x86!


HTTP Ardour Video Daemon decodes still images from movie files and serves them via HTTP. It provides frame-accurate decoding, and its main use case is to act as a backend and second-level cache for rendering the video timeline in Ardour.

Groove Basin

Groove Basin is a music player server with a web-based user interface inspired by Amarok 1.4. It runs on a server optionally connected to speakers. Guests can control the music player by connecting with a laptop, tablet, or smart phone. Further, users can stream their music libraries remotely.
It comes with a fast, responsive web interface that supports keyboard shortcuts and drag and drop. It also provides the ability to upload songs, download songs, and import songs by URL, including YouTube URLs. Groove Basin supports Dynamic Mode, which automatically queues random songs, favoring songs that have not been queued recently.
It automatically performs ReplayGain scanning on every song using the EBU R128 loudness standard, and automatically switches between track and album mode. Groove Basin supports the MPD protocol, which means it is compatible with MPD clients. There is also a more powerful Groove Basin protocol which you can use if the MPD protocol does not meet your needs.


HandBrake, a versatile video transcoder, is now available for Jessie. It can convert video from nearly any format to a wide range of commonly supported codecs.


A new jackd midiclock utility by Robin Gareus.


Laborejo, Esperanto for “Workshop”, is used to craft music through notation. It is a LilyPond GUI frontend, a MIDI creator and a tool collection to inspire and help music composers.


mpv is a movie player based on MPlayer and mplayer2. It supports a wide variety of video file formats, audio and video codecs, and subtitle types. The project focuses mainly on modern systems and encourages developer activity. As such, large portions of outdated code originating from MPlayer have been removed, and many new features and improvements have been added. Note that, although there are still some similarities to its predecessors, mpv should be considered a completely different program (e.g. lacking compatibility with both mplayer and mplayer2 in terms of command-line arguments and configuration).


SMTube is a stand-alone graphical video browser and player that makes browsing, playing, and downloading YouTube videos a piece of cake.
It has many features that, we are sure, will make YouTube lovers very happy.


Sonic Visualiser is an application for viewing and analysing the contents of music audio files.


SoundScapeRenderer (aka SSR) is a (rather) easy-to-use rendering engine for spatial audio that provides a number of different rendering algorithms, ranging from binaural (headphone) playback via wave field synthesis to higher-order Ambisonics.


videotrans is a set of scripts that allows its users to reformat existing movies into the VOB format used on DVDs.


XBMC has been partially rebranded as "XBMC from Debian" to make it clear that it has been changed to conform to Debian's Policy. The latest stable release, 13.2 "Gotham", will be part of Jessie, making Debian a good choice for HTPCs.


A binaural stereo signal converter by Fons Adriaensen.


A stereo monitoring organiser for jackd by Fons Adriaensen.


JACK clients to transmit multichannel audio over a local IP network, by Fons Adriaensen.


Radium Compressor is the system compressor of the Radium suite. It is provided as a stand-alone JACK application.


Multimedia Tasks

With Jessie we are shipping a set of multimedia-related tasks.
They include package lists for accomplishing several multimedia-related jobs. If you are interested in defining new tasks, or in tweaking the existing ones, we would very much like to hear from you.


Upgraded applications and libraries
  • Aeolus: 0.9.0
  • Aliki: 0.3.0
  • Ams: 2.1.1
  • amsynth: 1.4.2
  • Audacious: 3.5.2
  • Audacity: 2.0.5
  • Audio File Library: 0.3.6
  • Blender: 2.72b
  • Bristol: 0.60.11f
  • C* Audio Plugin Suite: 0.9.23
  • Cecilia: 5.0.9
  • cmus: 2.5.0
  • DeVeDe: 3.23.0-13-gbfd73f3
  • DRC: 3.2.1
  • EasyTag: 2.2.2
  • ebumeter: 0.2.0
  • faustworks: 0.5
  • ffDiaporama: 1.5
  • ffms: 2.20
  • gmusicbrowser: 1.1.13
  • Hydrogen:
  • IDJC: 0.8.14
  • jack-tools: 20131226
  • LiVES: 2.2.6
  • mhWaveEdit: 1.4.23
  • Mixxx: 1.11.0
  • mp3fs: 0.91
  • MusE: 2.1.2
  • Petri-Foo: 0.1.87
  • PHASEX: 0.14.97
  • QjackCtl: 0.3.12
  • Qtractor: 0.6.3
  • rtaudio: 4.1.1
  • Rosegarden: 14.02
  • rtmidi: 2.1.0
  • SoundTouch: 1.8.0
  • stk: 4.4.4
  • streamtuner2: 2.1.3
  • SuperCollider: 3.6.6
  • Synfig Studio: 0.64.1
  • TerminatorX: 3.90
  • tsdecrypt: 10.0
  • Vamp Plugins SDK: 2.5
  • VLC: Jessie will release with the 2.2.x series of VLC
  • XCFA: 4.3.8
  • xwax: 1.5
  • xjadeo: 0.8.0
  • x264: 0.142.2431+gita5831aa
  • zynaddsubfx: 2.4.3


What’s not going to be in Jessie

With the aim to improve the overall quality of the multimedia software available in Debian, we have dropped a number of packages which were abandoned upstream:

  • beast
  • flumotion
  • jack-rack
  • jokosher
  • lv2fil (suggested replacement for users is eq10q or calf eq)
  • phat
  • plotmm
  • specimen (suggested replacement for users is petri-foo – fork of specimen)
  • zynjacku (suggested replacement for users is jalv)

We’ve also dropped mplayer, as presently nobody seems interested in maintaining it.
The suggested replacements for users are mplayer2 or mpv. While the former is mostly compatible with mplayer in terms of command-line arguments and configuration (and adds a few new features too), the latter adds a lot of new features and improvements, and is actively maintained upstream.

Please note that although the mencoder package is no longer available, avconv and mpv do provide encoding functionality. For more information see avconv’s manual page and documentation, and mpv’s encoding documentation.


Broken functionalities

rtkit under systemd is broken at the moment.


Activity statistics

More information about the team’s activity is available.


Where to reach us

The Debian Multimedia Maintainers can be reached at pkg-multimedia-maintainers AT lists.alioth.debian.org for packaging related topics, or at debian-multimedia AT lists.debian.org for user and more general discussion.
We would like to invite everyone interested in multimedia to join us there. Some of the team members are also in the #debian-multimedia channel on OFTC.


Alessio Treglia
on behalf of the Debian Multimedia Maintainers


Categories: FLOSS Project Planets

Kushal Das: More Fedora in life

Planet Python - Thu, 2014-10-23 05:46

I have been using Fedora since the very first release, and started contributing to the project around 2005. I worked on Fedora during my free time: before I joined Red Hat in 2008, while I worked at Red Hat, and after I left Red Hat last year.

But for the last two weeks I have been working on Fedora not only in my free time but also as my day job. I am the Fedora Cloud Engineer, part of the Fedora Engineering team and of the amazing community of long-time Fedora friends.

Categories: FLOSS Project Planets

Kushal Das: Using docker in Fedora for your development work

Planet Python - Thu, 2014-10-23 05:30

Last week I worked on DNF for the first time. In this post I am going to explain how I used Docker and a Fedora cloud instance for that work.

I was using a CentOS VM as my primary work system for the last two weeks, and I had access to a cloud, where I created a Fedora 20 instance.

The first step was to install Docker and update the system; I also had to upgrade the selinux-policy package and reboot the instance.

# yum upgrade selinux-policy -y; yum update -y
# reboot
# yum install docker-io
# systemctl start docker
# systemctl enable docker

Then pull in the Fedora 21 Docker image.

# docker pull fedora:21

The above command will take time as it will download the image. After this we will start a Fedora 21 container.

# docker run -t -i fedora:21 /bin/bash

We will install all the required dependencies in the image: use yum as you normally would, then exit by pressing Ctrl+d.

[root@3e5de622ac00 /]# yum install dnf python-nose python-mock cmake -y

Now we can commit this as a new image so that we can reuse it in the future. We do this with the docker commit command.

# docker commit -m "with dnf" -a "Kushal Das" 3e5de622ac00 kushaldas/dnfimage

After this, the only thing left is to start a container from this newly created image with a directory mounted from the host machine.

# docker run -t -i -v /opt/dnf:/opt/dnf kushaldas/dnfimage /bin/bash

This command assumes the code is already in the /opt/dnf of the host system. Even if I managed to do something bad in that container, my actual host is safe. I just have to get out of the container and start a new one.

Categories: FLOSS Project Planets

Erich Schubert: Clustering 23 mio Tweet locations

Planet Debian - Thu, 2014-10-23 05:01
To test the scalability of ELKI, I've clustered 23 million Tweet locations from the Twitter Statuses Sample API obtained over 8.5 months (due to licensing restrictions by Twitter, I cannot make this data available to you, sorry).

23 million points is a challenge for advanced algorithms. It's quite feasible for k-means, in particular if you choose a small k and limit the number of iterations. But k-means does not make a whole lot of sense on this data set: it is a forced quantization algorithm and does not discover actual hotspots. Density-based clustering such as DBSCAN and OPTICS is much more appropriate. DBSCAN is a bit tricky to parameterize: you need to find the right combination of radius and density for the whole world. Given that Twitter adoption and usage differ quite a bit across regions, it is very likely that you won't find a single parameter set that is appropriate everywhere. OPTICS is much nicer here; we only need to specify a minimum object count, and I chose 1000, as this is a fairly large data set.

For performance reasons (and this is where ELKI really shines) I chose a bulk-loaded R*-tree index for acceleration. To benefit from the index, the epsilon radius of OPTICS was set to 5000 m. Also, ELKI allows using geodetic distance, so I can specify this value in meters and do not get many artifacts from coordinate projection. To extract clusters from OPTICS, I used the Xi method, with xi set to 0.01, a rather low value, again because of the large data set.

The results are pretty neat; here is a screenshot (using KDE Marble and OpenStreetMap data, since Google Earth segfaults for me right now):
Some observations:

  • Unsurprisingly, many cities turn up as clusters.
  • Regional differences are apparent, as seen in the screenshot: plenty of Twitter clusters in England, and a low acceptance rate in Germany (Germans do seem to have objections to using Twitter; maybe they still prefer texting, which was quite big in Germany; France and Spain use Twitter a lot more than Germany does).
  • Spam: some of the high usage in Turkey and Indonesia may be due to spammers running a lot of bots there. There is also a spam cluster in the ocean south of Lagos; some spammer uses random coordinates [0;1], and there are 36000 tweets there, so this is a valid cluster...
  • A benefit of OPTICS and DBSCAN is that they do not cluster every object; low-density areas are considered noise. They also support clusters of different shapes (which may be lost in this visualization, which uses convex hulls!) and different sizes, and OPTICS can also produce a hierarchical result.

Note that for these experiments, the actual Tweet text was not used. This has a rough correspondence to Twitter popularity "heatmaps", except that the clustering algorithms will actually provide a formalized data representation of activity hotspots, not only a visualization. You can also explore the clustering result in your browser; the Google Drive visualization functionality seems to work much better than Google Earth.

If you go to Istanbul or Los Angeles, you will see some artifacts: odd-shaped clusters with a clearly visible spike. This is caused by the Xi extraction of clusters, which is far from perfect. At the end of a valley in the OPTICS plot, it is hard to decide whether a point should be included or not. These errors are usually the last element in such a valley, and should be removed via postprocessing. But our OpticsXi implementation is meant to be as close as possible to the published method, so we do not intend to "fix" this.

Certain areas, such as Washington, DC, New York City, and Silicon Valley, do not show up as clusters. The reason is probably again the Xi extraction: these regions do not exhibit the steep density increase expected by Xi, but are too blurred into their surroundings to form a cluster. Hierarchical results can be found e.g. in Brasilia and Los Angeles. Compare the OPTICS results above to the k-means results (below) to see why I consider k-means results a meaningless quantization.

Sure, k-means is fast (30 iterations, not converged yet; it took 138 minutes on a single core with k=1000. The parallel k-means implementation in ELKI took 38 minutes on a single node; Hadoop/Mahout on 8 nodes took 131 minutes, as slow as a single CPU core!). But you can see how sensitive it is to misplaced coordinates (outliers, but mostly spam), how many "clusters" end up somewhere in the ocean, and that there is no resolution within cities. The UK is covered by 4 clusters with little meaning, and three of these clusters stretch all the way into Bretagne; the k-means clusters clearly aren't of high quality here.

If you want to reproduce these results, you need to get the upcoming ELKI version (0.6.5~201410xx; the output of cluster convex hulls was only recently added to the default codebase), and of course data. The settings I used are:

-dbc.in coords.tsv.gz
-db.index tree.spatial.rstarvariants.rstar.RStarTreeFactory
-pagefile.pagesize 500
-spatial.bulkstrategy SortTileRecursiveBulkSplit
-time
-algorithm clustering.optics.OPTICSXi
-opticsxi.xi 0.01
-algorithm.distancefunction geo.LngLatDistanceFunction
-optics.epsilon 5000.0
-optics.minpts 1000
-resulthandler KMLOutputHandler
-out /tmp/out.kmz

The total runtime for 23 million points on a single core was about 29 hours. The indexes helped a lot: fewer than 10000 distances were computed per point, instead of 23 million, so the expected speedup over a non-indexed approach is 2400. Don't try this with R or Matlab: your average R clustering algorithm will try to build a full distance matrix, and you probably don't have an exabyte of memory to store it. Maybe start with a smaller data set first, then see how far you can afford to increase the data size.
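The geodetic distance is what makes a single epsilon in meters meaningful world-wide. As a rough pure-Python illustration of why this matters (this is the standard haversine formula on a spherical Earth, not ELKI's implementation), note how the ground distance covered by one degree of longitude shrinks with latitude:

```python
import math

def haversine_m(lat1, lng1, lat2, lng2):
    """Great-circle distance in meters between two (lat, lng) points,
    using the haversine formula on a spherical Earth model."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# One degree of longitude at the equator vs. at 60°N: a fixed radius in
# projected degree coordinates would badly distort density estimates.
print(round(haversine_m(0.0, 0.0, 0.0, 1.0)))    # ~111195 m at the equator
print(round(haversine_m(60.0, 0.0, 60.0, 1.0)))  # roughly half that at 60°N
```

This is why specifying the OPTICS epsilon in meters, rather than in degrees, avoids projection artifacts across latitudes.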
Categories: FLOSS Project Planets

Aleksander Morgado: GUADEC-ES 2014: Zaragoza (Spain), 24th-26th October

GNU Planet! - Thu, 2014-10-23 04:25

A short notice to remind everyone that this weekend a bunch of GNOME hackers and users will be meeting in the beautiful city of Zaragoza (*) (Spain) for the Spanish-speaking GUADEC. The schedule is already available online:


Of course, non-Spanish-speaking people are also very welcome :)

See you there!

(*) Hoping not to make enemies: Zárágozá.

Filed under: GNOME Planet, GNU Planet Tagged: gnome, guadec
Categories: FLOSS Project Planets

Matthew Garrett: Linux Container Security

Planet Debian - Thu, 2014-10-23 03:47
First, read these slides. Done? Good.

Hypervisors present a smaller attack surface than containers. This is somewhat mitigated in containers by using seccomp, selinux and restricting capabilities in order to reduce the number of kernel entry points that untrusted code can touch, but even so there is simply a greater quantity of privileged code available to untrusted apps in a container environment when compared to a hypervisor environment[1].

Does this mean containers provide reduced security? That's an arguable point. In the event of a new kernel vulnerability, container-based deployments merely need to upgrade the kernel on the host and restart all the containers. Full VMs need to upgrade the kernel in each individual image, which takes longer and may be delayed due to the additional disruption. In the event of a flaw in some remotely accessible code running in your image, an attacker's ability to cause further damage may be restricted by the existing seccomp and capabilities configuration in a container. They may be able to escalate to a more privileged user in a full VM.

I'm not really compelled by either of these arguments. Both argue that the security of your container is improved, but in almost all cases exploiting these vulnerabilities would require that an attacker already be able to run arbitrary code in your container. Many container deployments are task-specific rather than running a full system, and in that case your attacker is already able to compromise pretty much everything within the container. The argument's stronger in the Virtual Private Server case, but there you're trading that off against losing some other security features - sure, you're deploying seccomp, but you can't use selinux inside your container, because the policy isn't per-namespace[2].

So that seems like kind of a wash - there's maybe marginal increases in practical security for certain kinds of deployment, and perhaps marginal decreases for others. We end up coming back to the attack surface, and it seems inevitable that that's always going to be larger in container environments. The question is, does it matter? If the larger attack surface still only results in one more vulnerability per thousand years, you probably don't care. The aim isn't to get containers to the same level of security as hypervisors, it's to get them close enough that the difference doesn't matter.

I don't think we're there yet. Searching the kernel for bugs triggered by Trinity shows plenty of cases where the kernel screws up from unprivileged input[3]. A sufficiently strong seccomp policy plus tight restrictions on the ability of a container to touch /proc, /sys and /dev helps a lot here, but it's not full coverage. The presentation I linked to at the top of this post suggests using the grsec patches - these will tend to mitigate several (but not all) kernel vulnerabilities, but there's tradeoffs in (a) ease of management (having to build your own kernels) and (b) performance (several of the grsec options reduce performance).

But this isn't intended as a complaint. Or, rather, it is, just not about security. I suspect containers can be made sufficiently secure that the attack surface size doesn't matter. But who's going to do that work? As mentioned, modern container deployment tools make use of a number of kernel security features. But there's been something of a dearth of contributions from the companies who sell container-based services. Meaningful work here would include things like:

  • Strong auditing and aggressive fuzzing of containers under realistic configurations
  • Support for meaningful nesting of Linux Security Modules in namespaces
  • Introspection of container state and (more difficult) the host OS itself in order to identify compromises

These aren't easy jobs, but they're important, and I'm hoping that the lack of obvious development in areas like this is merely a symptom of the youth of the technology rather than a lack of meaningful desire to make things better. But until things improve, it's going to be far too easy to write containers off as a "convenient, cheap, secure: choose two" tradeoff. That's not a winning strategy.

[1] Companies using hypervisors! Audit your qemu setup to ensure that you're not providing more emulated hardware than necessary to your guests. If you're using KVM, ensure that you're using sVirt (either selinux or apparmor backed) in order to restrict qemu's privileges.
[2] There's apparently some support for loading per-namespace Apparmor policies, but that means that the process is no longer confined by the sVirt policy
[3] To be fair, last time I ran Trinity under Docker under a VM, it ended up killing my host. Glass houses, etc.

Categories: FLOSS Project Planets

Python Piedmont Triad User Group: PYPTUG Meeting - October 27th

Planet Python - Thu, 2014-10-23 03:46
PYthon Piedmont Triad User Group meeting

Come join PYPTUG at our next meeting (October 27th 2014) to learn more about the Python programming language, modules and tools. Python is the perfect language to learn if you've never programmed before, and at the other end, it is also the perfect tool that no expert would do without.

What

Meeting will start at 5:30pm.

We will open with an intro to PYPTUG and how to get started with Python, PYPTUG activities and members' projects, then move on to news from the community.

This month we will have a tutorial review followed by a main talk.

Internet Tutorial Review

Continuing last month's review of Gizeh (Cairo for Tourists), this month we will review an Internet tutorial on creating digital coupons, "Branded MMS coupon generation", and a few ways that it could be made better. This should be of interest to many: mobile devs, devops, infrastructure architects, web app devs, marketers, CIO/CTOs.
Main Talk

by Francois Dion
Title: "Mystery Python Theater 3K: What should be your next step"

Bio: Francois Dion is the founder of PYPTUG. In the few words of his blog's profile he is an "Entrepreneur, Hacker, Mentor, Polyglot, Polymath, Musician, Photographer"

Abstract: Francois will talk about Python 3: why it should be on your radar, what some of the differences are, and how you should prepare for a transition. He will also review some specific cases in different fields, and what kinds of changes had to be made, such as the case of the MMA software, a "Band in a Box"-style Python program that lets you create full MIDI scores from basic chords, with Python 2 or 3.

 Lightning talks!

We will have some time for extemporaneous "lightning talks" of 5-10 minute duration. If you'd like to do one, some suggestions for talks were provided here if you are looking for inspiration. Or talk about a project you are working on.

When

Monday, October 27th 2014
Meeting starts at 5:30PM

Where

Wake Forest University,
close to Polo Rd and University Parkway:

Manchester Hall
room: Manchester 241  Wake Forest University, Winston-Salem, NC 27109

 Map this

See also this campus map (PDF) and also the Parking Map (PDF) (Manchester hall is #20A on the parking map)

And speaking of parking: parking after 5pm is on a first-come, first-served basis. The official parking policy is:
"Visitors can park in any general parking lot on campus. Visitors should avoid reserved spaces, faculty/staff lots, fire lanes or other restricted areas on campus. Frequent visitors should contact Parking and Transportation to register for a parking permit."

Mailing List
Don't forget to sign up to our user group mailing list:


It is the only step required to become a PYPTUG member.

Meetup Group
In order to get a feel for how much food we'll need, we ask that you register your attendance to this meeting on meetup:

Categories: FLOSS Project Planets

Mike Stiv - Drupal developer and consultant: Drush pro for the lazy: Aliases

Planet Drupal - Thu, 2014-10-23 03:00

Drush aliases allow us to execute commands on a remote site from the local console. They are the perfect tool for the lazy Drupal developer: with Drush aliases I rarely log in to a remote server, since I can execute all drush commands from my local console. They are also great for workflow automation. Continue reading to help you set up your aliases.

Categories: FLOSS Project Planets

Montreal Python User Group: Mercurial: An easy and powerful alternative to git!

Planet Python - Thu, 2014-10-23 00:00

You have probably heard about git and you understand that source control is a good thing. Who made this change? When? Why? How do I go back to before this change happened? Which change broke the code? How do I combine two different streams of the same code? How do I collaborate with others? How do I collaborate with my past self, who knew things that my present self has forgotten?

Mercurial answers all of these questions! Like git and others, Mercurial is a distributed version control system (DVCS). Big players like Python, Mozilla, Facebook and others use it to keep track of their source code. Many hosting services exist for it, such as Mozdev, Google Code, or Bitbucket.

During our workshop, we will introduce the basics of using DVCS and how to configure and use Mercurial to suit your needs. The presentation will be in English, but we encourage questions and discussions in French.

Just bring your laptop, we'll have power and wifi!


Room A-3230
École de Technologie Supérieure
1100 Rue Notre-Dame Ouest
Montréal, QC H3C 1K3 (Canada)


November 6th, from 6pm until 9pm



Categories: FLOSS Project Planets