FLOSS Project Planets

Daniel Bader: How to Install and Uninstall Python Packages Using Pip

Planet Python - Mon, 2017-07-24 20:00

A step-by-step introduction to basic Python package management skills with the “pip” command. Learn how to install and remove third-party modules from PyPI.

Python is approaching its third decade, and over the years many people have contributed Python packages that perform specific functions and operations.

As of this writing, there are ~112K packages listed on the PyPI website. PyPI is short for “Python Package Index”, a central repository for free third-party Python modules.

This large and convenient module ecosystem is what makes Python so great to work with:

You see, most Python programmers are really assemblers of Python packages, which take care of a big chunk of the programming load required by modern applications.

Chances are that there is more than one Python package ready to be unleashed and help you with your specific programming needs.

For instance, while reading dbader.org, you may notice that the pages on the site render emoji quite nicely. You may wonder…

I’d like to use emoji on my Python app!

Is there a Python package for that?

Let’s find out!

Here’s what we’ll cover in this tutorial:

  1. Finding Python Packages
  2. What to Look for in a Python Package
  3. Installing Python Packages With Pip
  4. Capturing Installed Python Packages with Requirements Files
  5. Visualizing Installed Packages
  6. Installing Python Packages From a requirements.txt File
  7. Uninstalling Python Packages With Pip
  8. Summary & Conclusion
Finding Python Packages

Let’s use the emoji use case as an example. We find emoji-related Python packages by visiting the PyPI website and searching for “emoji” via the search box in the top-right corner of the page.

As of this writing, PyPI lists 94 packages, of which a partial list is shown below.
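If you prefer to stay in the terminal, pip can query the same index; a quick sketch, assuming pip 9.x:

$ pip search emoji

This prints the name, version, and summary of each matching package, but not the weight ranking discussed next, so the website is still the better research tool here.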

Notice the “Weight*” header of the middle column. That’s a key piece of information: the weight value is a search relevance score that the site calculates for each package in order to rank the results.

If we read the footnote it tells us that the number is calculated by “the occurrence of search term weighted by field (name, summary, keywords, description, author, maintainer).”

Does that mean that the top one is the best package?

Not necessarily. Although uncommon, a package maintainer may stuff emoji into every field to try to push the package to the top of the ranking, and that does happen.

Conversely, many developers don’t do their homework and don’t bother filling out all the fields for their packages, which results in those packages being ranked lower.

You still need to research the packages listed, considering what your specific end use will be. For instance, a key question could be:

Which environment do you want to implement emoji on? A terminal-based app, or perhaps a Django web app?

If you are trying to display emoji on a Django web app, you may be better off with the 10th package down the list shown above (package django-emoji 2.2.0).

For our use case, let’s assume that we are interested in emoji for a terminal based Python app.

Let’s check out the first one on our list (package emoji 0.4.5) by clicking on it.

What to Look for in a Python Package

The following are characteristics of a good Python package:

  1. Decent documentation: By reading it we can get a clue as to whether the package could meet our need or not;
  2. Maturity and stability: It’s been around for some time, proven by both its age and its successive versions;
  3. Number of contributors: Healthy packages (especially complex ones) tend to have a healthy number of maintainers;
  4. Maintenance: It undergoes maintenance on a regular basis (we live in an ever-evolving world).

Although I would check it out, I wouldn’t rely too much on the development status listed for each package, that is, whether it’s a 4 - Beta or 5 - Production/Stable package. That classification is in the eye of the package creator and not necessarily reliable.

In our emoji example, the documentation seems decent. At the top of the page, we get a graphical indication of the package at work (see snippet below), which shows emoji on a Python interpreter. Yay!

The documentation for our emoji package also tells us about installing it, how to contribute to its development, etc., and points us to a GitHub page for the package, which is a great source of useful information about it.

By visiting its GitHub page, we can glean from it that the package has been around for at least two years, was last maintained in the past couple of months, has been starred 300+ times, has been forked 58 times, and has 10 contributors.

It’s looking good! We have identified a good candidate to incorporate emoji-ing into our Python terminal app.

How do we go about installing it?

Installing Python Packages With Pip

At this time, I am assuming that you already have Python installed on your system. There is plenty of info out there as to how to accomplish that.

Once you install Python, you can check whether pip is installed by running pip --version on a terminal.

I get the following output:

$ pip --version
pip 9.0.1 from /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages (python 3.5)

Since Python 3.4, pip is bundled with the Python installation package. If for some reason it is not installed, go ahead and get it installed.
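Since pip ships with Python 3.4 and later, the standard library’s ensurepip module can usually bootstrap it if it’s missing; a minimal sketch:

$ python -m ensurepip --upgrade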

I highly recommend also that you use a virtual environment (and more specifically, virtualenvwrapper), a set of extensions that…

…include wrappers for creating and deleting virtual environments and otherwise managing your development workflow, making it easier to work on more than one project at a time without introducing conflicts in their dependencies.

For this tutorial, I have created a virtual environment called pip-tutorial, which you will see going forward. My other tutorial walks you through setting up Python and virtualenvwrapper on Windows.

Below you’ll see how package dependencies can bring complexity into our already complex development environments, which is the reason why using virtual environments is a must for Python development.

A great place to start learning about a terminal program is to run it without any options. So, on your terminal, run pip. You will get a list of Commands and General Options.

Below is a partial list of the results on my terminal:
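For pip 9.x, the Commands portion looks roughly like this (abridged):

Usage:
  pip <command> [options]

Commands:
  install                     Install packages.
  download                    Download packages.
  uninstall                   Uninstall packages.
  freeze                      Output installed packages in requirements format.
  list                        List installed packages.
  show                        Show information about installed packages.
  check                       Verify installed packages have compatible dependencies.
  search                      Search PyPI for packages.
  wheel                       Build wheels from your requirements.
  hash                        Compute hashes of package archives.
  completion                  A helper command used for command completion.
  help                        Show help for commands.

General Options:
  ...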

From there you could, for example, run pip install --help to read about what the install command does and what you need to specify to run it. Of course, the pip documentation is another great place to start.

$ pip install --help

Usage:
  pip install [options] <requirement specifier> [package-index-options] ...
  pip install [options] -r <requirements file> [package-index-options] ...
  pip install [options] [-e] <vcs project url> ...
  pip install [options] [-e] <local project path> ...
  pip install [options] <archive url/path> ...

Description:
  Install packages from:

  - PyPI (and other indexes) using requirement specifiers.
  - VCS project urls.
  - Local project directories.
  - Local or remote source archives.

  pip also supports installing from "requirements files", which provide
  an easy way to specify a whole environment to be installed.

Install Options:
  ...

Let’s take a quick detour and focus on the freeze command next, which will be a key one in dealing with dependencies. Running pip freeze displays a list of all installed Python packages. If I run it with my freshly installed virtual environment active, I should get an empty list, which is the case:

$ pip freeze

Now we can get the Python interpreter going by typing python on our terminal. Once that’s done, let’s try to import the emoji module. Python will complain that no such module is installed, and rightfully so, for we haven’t installed it yet:

$ python
Python 3.5.0 (default)
[GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.38)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import emoji
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named 'emoji'

To finally install the package, we can go ahead and run pip install emoji==0.4.5 on our terminal, pinning the exact version we vetted. I get the following output:

$ pip install emoji==0.4.5
Collecting emoji==0.4.5
Installing collected packages: emoji
Successfully installed emoji-0.4.5
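As a quick sanity check, we can import the module again; a sketch assuming the emojize() helper works as shown in the package’s README:

$ python
>>> import emoji
>>> emoji.emojize('Python is :thumbsup:', use_aliases=True)
'Python is 👍'

And pip freeze now reports the pinned package, which is exactly what we’ll capture into a requirements.txt file in the sections that follow:

$ pip freeze
emoji==0.4.5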
Categories: FLOSS Project Planets

Justin Mason: Links for 2017-07-24

Planet Apache - Mon, 2017-07-24 19:58
Categories: FLOSS Project Planets

Freelock : How do you keep a high bar of quality on dozens of sites every day?

Planet Drupal - Mon, 2017-07-24 19:33

DevOps is the union of development, operations, and quality assurance -- but it's really the other way around. You start with the quality -- developing tests to ensure that things that have broken in the past don't break in the future, making sure the production environment is in a known, fully reproducible state, and setting protections in place so you can roll back to this state if anything goes wrong.

DevOps, Behavior Driven Development, Drupal, WordPress, Continuous Deployment, Continuous Integration, Drupal Planet
Categories: FLOSS Project Planets

Bits from Debian: DebConf17 Schedule Published!

Planet Debian - Mon, 2017-07-24 19:15

The DebConf17 orga team is proud to announce that over 120 activities have been scheduled so far, including 45- and 20-minute talks, team meetings, and workshops, as well as a variety of other events.

Most of the talks and BoFs will be streamed and recorded, thanks to our amazing video team!

We'd like to remind you that Saturday August 5th is also our much anticipated Open Day! This means a program for a wider audience, including special activities for newcomers, such as an AMA session about Debian, a beginners workshop on packaging, a thoughtful talk about freedom with regard to today's popular gadgets and more.

In addition to the published schedule, we'll provide rooms for ad-hoc sessions where attendees will be able to schedule activities at any time during the whole conference.

The current schedule is available at https://debconf17.debconf.org/schedule/

This is also available through an XML feed. You can use ConfClerk in Debian to consume this, or Giggity on Android devices: https://debconf17.debconf.org/schedule/mobile/

We look forward to seeing you in Montreal!

Categories: FLOSS Project Planets

Norbert Preining: Garmin fenix 5x – broken by design

Planet Debian - Mon, 2017-07-24 19:11

Some months ago I upgraded my (European) fenix3 to a (Japanese) fenix 5x, looking forward to the built-in maps as well as support for Japanese. I was about to write a great review of how content I had been with the fenix 3 and how much better the 5x is. Well, until I realized that Garmin’s engineers seem to be shipping devices that are broken by design. Just one example: set an alarm on the watch, and wonder …

.. because you will never wake up the next day: the alarm was deleted by an unattended (so-called) sync operation. This happened to me several times; the worst was when I was with clients, working as a guide.

So the problem is well known (see this link and this link and this link), and it is not restricted to the fenix. The origin of the problem is that Garmin is incapable of implementing a proper synchronization protocol. They have three sources of data: the Connect website, the Connect application on the smartphone, and the watch itself. Interestingly, they only push from web to application to device, overwriting settings on the lower end. That means any alarm I create on the watch will be overwritten, i.e. deleted, on every sync; sorry, not sync, on every forced push from the mobile.
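In pseudo-Python, the difference between Garmin’s forced push and a real merge looks like this (a sketch, not Garmin’s actual code):

# Forced push: the app's state simply replaces the watch's state.
watch_alarms = {"06:00"}        # alarm created on the watch
app_alarms = set()              # the Connect app never learned about it
watch_alarms = set(app_alarms)  # "sync" runs: the 06:00 alarm is silently gone

# A real two-way sync would merge changes from both sides first.
watch_alarms = {"06:00"}
merged = watch_alarms | app_alarms  # keep additions from either side
# (handling deletions needs tombstones, which is the hard part of real sync)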

It is a bit depressing, but I think the urgent solution is to exempt alarms from the synchronization completely, and remove them from both the Connect web page and the application, until a better, a real, synchronization is implemented.

I found a work-around in one of the threads on the Garmin forum: create alarms on the web or in the application, ideally one for each of the times you might need. These will be synced to the device and can be activated/deactivated on the device without problems. But to be honest, this is not a solution I can really trust when I need to get up in the morning.

Besides this issue, which is unfortunately a rather severe restriction for me as a mountaineer and mountain guide, I really love the watch and will write a more detailed review soon.

Categories: FLOSS Project Planets

Ortwin Glück: [Code] ratelimiting eject

Planet Apache - Mon, 2017-07-24 16:02
Mapped the eject key of my old iMac to the following script to rate limit it:

#!/bin/sh
# Serialize eject presses with a lock file: while one invocation holds
# the lock (for the eject itself plus a 5 second cool-down), any queued
# repeat presses bail out immediately instead of piling up.
L="/tmp/eject.lock"
[ -e $L ] || touch $L
exec 3<$L
flock -n 3 || exit
eject "$@"
sleep 5

Because I found it really annoying when the kids fill the keyboard buffer with eject events that take forever to be processed.
Categories: FLOSS Project Planets

Jonathan McDowell: Learning to love Ansible

Planet Debian - Mon, 2017-07-24 13:41

This post attempts to chart my journey towards getting usefully started with Ansible to manage my system configurations. It’s a high level discussion of how I went about doing so and what I got out of it, rather than including any actual config snippets - there are plenty of great resources out there that handle the actual practicalities of getting started much better than I could.

I’ve been convinced about the merits of configuration management for machines for a while now; I remember conversations about producing an appropriate set of recipes to reproduce our haphazard development environment reliably over 4 years ago. That never really got dealt with before I left, and as managing systems hasn’t been part of my day job since then I never got around to doing more than working my way through the Puppet Learning VM. I do, however, continue to run a number of different Linux machines - a few VMs, a hosted dedicated server and a few physical machines at home and my parents’. In particular I have a VM which handles my parents’ email, and I thought that was a good candidate for trying to properly manage. It’s backed up, but it would be nice to be able to redeploy that setup easily if I wanted to move provider, or do hosting for other domains in their own VMs.

I picked Ansible, largely because I wanted something lightweight and the agentless design appealed to me. All I really need to do is ensure Python is on the host I want to manage and everything else I can bootstrap using Ansible itself. Plus it meant I could use the version from Debian testing on my laptop and not require backports on the stable machines I wanted to manage.

My first attempt was to write a single Ansible YAML file which did all the appropriate things for the email VM; installed Exim/Apache/Roundcube, created users, made sure the appropriate SSH keys were in place, installed configuration files, etc, etc. This did the job, but I found myself thinking it was no better than writing a shell script to do the same things.

Things got a lot better when instead of concentrating on a single host I looked at what commonality was shared between hosts. I started with simple things; Debian is my default distro so I created an Ansible role debian-system which configured up APT and ensured package updates were installed. Then I added a task to setup my own account and install my SSH keys. I was then able to deploy those 2 basic steps across a dozen different machine instances. At one point I got an ARM64 VM from Scaleway to play with, and it was great to be able to just add it to my Ansible hosts file and run the playbook against it to get my basic system setup.
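To make that concrete, running a shared role against an inventory is a one-liner; a sketch in which the hostnames and site.yml are hypothetical, not the actual setup described here:

$ cat hosts
[debian]
mail.example.org
web.example.org
arm64-test.example.org

$ ansible-playbook -i hosts site.yml --limit arm64-test.example.org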

Adding email configuration got trickier. In addition to my parents’ email VM I have my own email hosted elsewhere (along with a whole bunch of other users) and the needs of both systems are different. Sitting down and trying to manage both configurations sensibly forced me to do some rationalisation of the systems, pulling out the commonality and then templating the differences. Additionally I ended up using the lineinfile module to edit the Debian supplied configurations, rather than rolling out my own config files. This helped ensure more common components between systems. There were also a bunch of differences that had grown out of the fact each system was maintained by hand - I had about 4 copies of each Let’s Encrypt certificate rather than just putting one copy in /etc/ssl and pointing everything at that. They weren’t even in the same places on different systems. I unified these sorts of things as I came across them.
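Such lineinfile edits can also be tried ad hoc from the shell before baking them into a role; a sketch with a made-up host group and target file (-b escalates privileges):

$ ansible mailservers -i hosts -b -m lineinfile \
    -a "dest=/etc/default/spamassassin regexp='^ENABLED=' line='ENABLED=1'"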

Throughout the process of this rationalisation I was able to easily test using containers. I wrote an Ansible role to create systemd-nspawn based containers, doing all of the LVM + debootstrap work required to produce a system which could then be managed by Ansible. I then pointed the same configuration as I was using for the email VM at this container, and could verify at each step along the way that the results were what I expected. It was still a little nerve-racking when I switched over the live email config to be managed by Ansible, but it went without a hitch as hoped.

I still have a lot more configuration to switch to being managed by Ansible, especially on the machines which handle a greater number of services, but it’s already proved extremely useful. To prepare for a jessie to stretch upgrade I fired up a stretch container and pointed the Ansible config at it. Most things just worked and the minor issues I was able to fix up in that instance leaving me confident that the live system could be upgraded smoothly. Or when I want to roll out a new SSH key I can just add it to the Ansible setup, and then kick off an update. No need to worry about whether I’ve updated it everywhere, or correctly removed the old one.

So I’m a convert; things were a bit more difficult by starting with existing machines that I didn’t want too much disruption on, but going forward I’ll be using Ansible to roll out any new machines or services I need, and expect that I’ll find that new deployment to be much easier now I have a firm grasp on the tools available.

Categories: FLOSS Project Planets

Python Anywhere: Outage report: 20, 21 and 22 July 2017

Planet Python - Mon, 2017-07-24 12:44

We had several outages over the last few days. The problem appears to be fixed now, but investigations into the underlying cause are still underway. This post is a summary of what happened, and what we know so far. Once we've got a better understanding of the issue, we'll post more.

It's worth saying at the outset that while the problems related to the way we manage our users' files, those files themselves were always safe. While availability problems are clearly a big issue, we regard data integrity as more important.

20 July: the system update

On Thursday 20 July, at 05:00 UTC, we released a new system update for PythonAnywhere. This was largely an infrastructural update. In particular, we updated our file servers from Ubuntu 14.04 to 16.04, as part of a general upgrade of all servers in our cluster.

File servers are, of course, the servers that manage the files in your PythonAnywhere disk storage. Each server handles the data for a specific set of users, and serves the files up to the other servers in the cluster that need them -- the "execution" servers where your websites, your scheduled tasks, and your consoles run. The files themselves are stored on network-attached storage (and mirrored in realtime to redundant disks on a separate set of backup servers); the file servers simply act as NFS servers and manage a few simple things like disk quotas.

While the system update took a little longer than we'd planned, once everything was up and running, the system looked stable and all monitoring looked good.

20 July: initial problems

At 12:07 UTC our monitoring system showed a very brief issue. From some of our web servers, it appeared that access to one of our file servers, file-2, had very briefly slowed right down -- it was taking more than 30 seconds to list the specific directory that is monitored. The problem cleared up after about 60 seconds. Other file servers were completely unaffected. We did some investigations, but couldn't find anything, so we chalked it up as a glitch and kept an eye out for further problems.

At 14:12 UTC it happened again, and then over the course of the afternoon, the "glitches" became more frequent and started lasting longer. We discovered that the symptom from the file server's side was that all of the NFS daemons -- the processes that together make up an NFS server -- would all become busy; system load would rise from about 1.5 to 64 or so. They were all waiting uninterruptibly on what we think was disk I/O (status "D" in top).
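Processes in uninterruptible sleep show up with state D; a minimal sketch of that kind of check (not PythonAnywhere’s actual monitoring):

$ ps -eo pid,stat,wchan:30,comm | awk 'NR == 1 || $2 ~ /^D/'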

The problem only affected file-2 -- other file servers were all fine. Given that every file server had been upgraded to an identical system image, our initial suspicion was that there might be some kind of hardware problem. At 17:20 UTC we got in touch with AWS to discuss whether this was likely.

By 19:10 our discussions with AWS had revealed nothing of interest. The "glitches" had become a noticeable problem for users, and we decided that while there was no obvious sign of hardware problems, it would be good to at least eliminate that as a possible cause. So we took a snapshot of all disks containing user data (for safety), then migrated the server to new hardware. This caused a 20-minute outage for users on that file server (who were already seeing a serious slowdown anyway), and a 5-minute outage for everyone else, the latter because we had to reboot the execution servers.

After this move, at 19:57 UTC, everything seemed OK. Our monitoring was clear, and the users we were in touch with confirmed that everything was looking good.

21 July: the problem continues

At 14:31 UTC on 21 July, we saw another glitch on our monitoring. Again, the problem cleared up quickly, but we started looking again into what could possibly be the cause. There were further glitches at 15:17 and 16:51, but then the problem seemed to clear up.

Unfortunately at 22:44 it flared up again. Again, the issues started happening more frequently, and lasting longer each time, until they became very noticeable for our users at around 00:30 UTC. At 00:55 UTC we decided to move the server to different hardware again -- there's no official way to force a move to new hardware on AWS; stopping and starting an instance usually does it, but there's a small chance you'd end up on the same physical host again, so a second attempt seemed worthwhile. If nothing else, it would hopefully at least clear things up for another 12 hours or so and buy us time to work out what was really going wrong.

This time, things didn't go according to plan. The file server failed to come up on the new hardware, and trying to move again did not help. We decided that we were going to need to provision a completely fresh file server, and move the disks across. While we have processes in place for replacing file servers as part of a normal system update, and for moving them to new hardware without changing (for example) their IP address, replacing one under these circumstances is not a procedure we've done before. Luckily, it went as well as could be expected under the circumstances. At 01:23 UTC we'd worked out what to do and started the new file server. By 01:50 we'd started the migration, and by 02:20 UTC everything was moved over. There were a few remaining glitches, but these were cleared up by 02:45 UTC.

22 July -- more problems -- and a resolution?

We did not expect the fix we'd put in to be a permanent solution -- though we did have a faint hope that perhaps the problem had been caused by some configuration issue on file-2, which might have been remediated by our having provisioned a new server rather than simply moving the old one. This was never a particularly strong hope, however, and when the problems started again at 12:16 UTC we weren't entirely surprised.

We had generated two new hypotheses about the possible cause of these issues by now:

  • The number of NFS daemons running on the machine was either too low or too high. Before our upgrade, we'd been running 8 daemons, and we'd moved to running 64 -- which we believed was the right number given the size of the machines in question, but of course we could have been wrong.
  • The upgrade from Ubuntu 14.04 to 16.04 could have introduced some kind of bug at the NFS level.

The problem with both of these hypotheses was that only one of our file servers was affected. All file servers had the same number of workers, and all had been upgraded to 16.04.

Still, it was worth a try, we thought. We decided to try changing the number of daemon processes first, as we believed it would cause minimal downtime; however, we started up a new file server on 14.04 so that it would be ready just in case.

At 14:41 UTC we reduced the number of workers down to eight. We were happy to see that this was picked up across the cluster without any need to reboot anything, so there was no downtime.
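That matches how Linux behaves: the kernel’s nfsd thread pool can be resized at runtime. A sketch of the general mechanism, not necessarily the exact commands used:

$ sudo rpc.nfsd 8             # resize the nfsd thread pool in place
$ cat /proc/fs/nfsd/threads   # confirm the new count
8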

Unfortunately, at 15:04, we saw another problem. We decided to spend more time investigating a few ideas that had occurred to us before taking the system down again. At 19:00 we tried increasing the number of NFS processes to 128, but that didn't help. At 19:23 we decided to go ahead with switching over to the 14.04 server we'd prepared earlier. We kicked off some backup snapshots of the user data, just in case there were problems, and at 19:38 we started the migration over.

This completed at 19:46, but required a restart of all of the execution servers in order to pick up the new file server. We started this process immediately, and web servers came back online at 19:48, consoles at 19:50, and scheduled tasks at 19:55.

By 20:00 we were satisfied that everything looked good, and so we went back to monitoring.

Where we are now

Since the update on Saturday, there were no monitoring glitches at all on Sunday, but we did see one potential problem on Monday at 12:03. However this blip was only noticed from one of our web servers (previous issues affected at least three at a time, and sometimes as many as nine), and the problem has not been followed by any subsequent outages in the 4 hours since, which is somewhat reassuring.

We're continuing to monitor closely, and are brainstorming hypotheses to explain what might have happened (or, perhaps still be happening). Of particular interest is the fact that this issue only affected one of our file servers, despite all of them having been upgraded. One possibility we're considering is that the correlation in timing with the upgrade is simply a red herring -- that instead there's some kind of access pattern, some particular pattern of reads/writes to the storage, which only started at around midday on Thursday after the system update. We're planning possible ways to investigate that should the problem occur again.

Either way, whether the problem is solved now or not, we clearly have much more investigation to do. We'll post again when we have more information.

Categories: FLOSS Project Planets

Will Kahn-Greene: Soloists: code review on a solo project

Planet Python - Mon, 2017-07-24 12:00
Summary

I work on some projects with other people, but I also spend a lot of time working on projects by myself. When I'm working by myself, I have difficulties with the following:

  1. code review
  2. bouncing ideas off of people
  3. peer programming
  4. long slogs
  5. getting help when I'm stuck
  6. publicizing my work
  7. dealing with loneliness
  8. going on vacation

I started a #soloists group at Mozilla figuring there are a bunch of other Mozillians who are working on solo projects and maybe if we all work alone together, then that might alleviate some of the problems of working solo. We hang out in the #soloists IRC channel on irc.mozilla.org. If you're solo, join us!

I keep thinking about writing a set of blog posts for things we've talked about in the channel and how I do things. Maybe they'll help you.

This one covers code review.

Read more… (10 mins to read)

Categories: FLOSS Project Planets

Doug Hellmann: hmac — Cryptographic Message Signing and Verification — PyMOTW 3

Planet Python - Mon, 2017-07-24 09:00
The HMAC algorithm can be used to verify the integrity of information passed between applications or stored in a potentially vulnerable location. The basic idea is to generate a cryptographic hash of the actual data combined with a shared secret key. The resulting hash can then be used to check the transmitted or stored message … Continue reading hmac — Cryptographic Message Signing and Verification — PyMOTW 3
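A minimal sketch of that idea using the standard-library hmac module (sign with a shared key, then verify in constant time):

$ python3
>>> import hmac, hashlib
>>> key, msg = b'shared-secret', b'the message'
>>> tag = hmac.new(key, msg, hashlib.sha256).hexdigest()
>>> hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())
True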
Categories: FLOSS Project Planets

Mediacurrent: Creating Content with YAML Content Module

Planet Drupal - Mon, 2017-07-24 08:37

In my previous post, I introduced the YAML Content module and described the goal and usage for it at a high level. In this post, I aim to provide a more in-depth look at how to write content for the module and how to take advantage of a couple of the more advanced options included.
 

Categories: FLOSS Project Planets

Python Software Foundation: 2017 Bylaw Changes

Planet Python - Mon, 2017-07-24 07:35
The PSF has changed its bylaws, following a discussion and vote among the voting members. I'd like to publicly explain those changes.
For each of the changes, I will describe 1) what the bylaws used to say prior to June 2017, 2) what the new bylaws say, and 3) why the changes were implemented.
Certification of Voting Members
  • What the bylaws used to say
Every member had to state each year whether or not they wanted to vote.
  • What the bylaws now say
The bylaws now say that the list of voters is based on criteria decided upon by the board.
  • Why was this change made?
The previous certification requirement created too much work for our staff to handle, and sometimes it was not done because we did not have the time or resources to do it. We can now change the certification to something more manageable for our staff and our members.
Voting in New PSF Fellow Members
  • What the bylaws used to say
We did not have a procedure in place for this in the previous bylaws.
  • What the bylaws now say
Now the bylaws allow any member to nominate a Fellow. Additionally, it gives the chance for the PSF Board to create a work group for evaluating the nominations.
  • Why was this change made?
We lacked a procedure. We had several inquiries and nominations in the past, but did not have a policy to respond with. Now that this bylaw has been voted in, the PSF Board has voted to create the Work Group, and after several years we can begin accepting new Fellow members again.
Staggered Board Terms
  • What the bylaws used to say
We did not have staggered board terms prior to June 2017. Every director would be voted on every term.
  • What the bylaws now say
The bylaws now say that in the June election, the top 4 voted-in directors hold 3-year terms, the next 4 voted-in directors hold 2-year terms, and the next 3 voted-in directors hold 1-year terms. That resulted in:
  1. Naomi Ceder (3 yr)
  2. Eric Holscher (3 yr)
  3. Jackie Kazil (3 yr)
  4. Paul Hildebrandt (3 yr)
  5. Lorena Mesa (2 yr)
  6. Thomas Wouters (2 yr)
  7. Kushal Das (2 yr)
  8. Marlene Mhangami (2 yr)
  9. Kenneth Reitz (1 yr)
  10. Trey Hunner (1 yr)
  11. Paola Katherine Pacheco (1 yr)
  • Why was this change made?
The main push behind this change is continuity. As the PSF continues to grow, we are hoping to make it more stable and sustainable. Having some directors in place for more than one year will help us better complete short-term and long-term projects. It will also help us pass on context from previous discussions and meetings.
Direct Officers
  • What the bylaws used to say
We did not have Direct Officers prior to June 2017.
  • What the bylaws now say
The bylaws state that the current General Counsel and Director of Operations will be the Direct Officers of the PSF. Additionally, they state that the Direct Officers become the 12th and 13th members of the board, giving them the right to vote on board business. Direct Officers can be removed if they a) fail an approval vote, held on at least the same schedule as 3-year-term directors; b) leave the office associated with the officer director position; or c) fail a no-confidence vote.
  • Why was this change made?
In an effort to become a more stable and mature board, we are appointing two important positions as directors of the board. Having the General Counsel and Director of Operations on the board strengthens us in legal situations and in how the PSF operates. The two new Direct Officers are:
  1. Van Lindberg
  2. Ewa Jodlowska
Delegating Ability to Set Compensation
  • What the bylaws used to say
The bylaws used to state that the President of the Foundation would direct how compensation of the Foundation’s employees was decided.
  • What the bylaws now say
The bylaws have changed so that the Board of Directors decides how employee compensation is set.
  • Why was this change made?
This change was made because, even though we keep the president informed of major changes, Guido does not participate in day-to-day operations or employee management. We wanted the bylaws to clarify the most effective and fair way to set compensation for our staff.
We hope this breakdown sheds light on the changes and why they were important to implement. Please feel free to contact me with any questions or concerns.
Categories: FLOSS Project Planets

A. Jesse Jiryu Davis: Vote For Your Favorite PyGotham Talks

Planet Python - Mon, 2017-07-24 06:57

We received 195 proposals for talks at PyGotham this year. Now we have to find the best 50 or so. For the first time, we’re asking the community to vote on their favorite talks. Voting will close August 7th; then I and my comrades on the Program Committee will make a final selection.

Your Mission, If You Choose To Accept It

We need your help judging which proposals are the highest quality and the best fit for our community’s interests. For each talk we’ll ask you one question: “Would you like to see this talk at PyGotham?” Remember, PyGotham isn’t just about Python: it’s an eclectic conference about open source technology, policy, and culture.

You can give each talk one of:

  • +1 “I would definitely like to see this talk”
  •  0 “I have no preference on this talk”
  • -1 “I do not think this talk should be in PyGotham”

You can sign up for an account and begin voting at vote.pygotham.org. The site presents you with talks in random order, omitting the ones you have already voted on. For each talk, you will see this form:

Click “Save Vote” to make sure your vote is recorded. Once you do, a button appears to jump to the next proposal.

Our thanks to Ned Jackson Lovely, who made this possible by sharing the talk voting app “progcom” that was developed for the PyCon US committee.

So far, about 50 people have cast votes. We need to hear from you, too. Please help us shape this October’s PyGotham. Vote today!

Image: Voting in Brisbane, 1937

Categories: FLOSS Project Planets

KDE Slimbook and FreeBSD

Planet KDE - Mon, 2017-07-24 06:52

Yesterday I picked up my new KDE Slimbook. It comes with KDE Neon pre-installed. Of course it also works well with openSUSE, Manjaro, and Netrunner Linux (some of which I’ve at least booted the Live CD for). But for me, “will it run FreeBSD” is actually the most important bit.

Yes. Yes it does, and it does so beautifully.

That is at least one advantage of choosing a Free Software friendly laptop, one designed for GNU/Linux: it is likely to be supported by many more operating systems than you might expect. No, I have not tried OpenSolaris / Illumos on it .. there’s really no desktop distro in that corner anymore.

So, here’s how to break, er, upgrade your Slimbook to FreeBSD (no warranty implied):

  • Resize the installed partition to make space for FreeBSD. I chopped 40GB off the end of the main Linux partition. This may break various crypt-setup things, be careful. For resizing, I actually used the Manjaro installer, and told it to resize an existing partition, then cancelled the install during unsquash, and then deleted the partition it had made. You can probably use resize2fs and gparted with good effect, too.
  • Install FreeBSD 12-CURRENT. I went for UFS on a single partition, no swap: that’s the easiest to get right, and avoids weirdness like zpools on a partition. It ended up in ada0p4, or sda4, or (hd0, gpt4) depending on what nomenclature you use for naming disks.
  • Oh, yeah .. don’t install a boot manager for FreeBSD. We’ll let the existing GRUB deal with it.
  • Reboot and let GRUB start Linux again. We’ll configure GRUB to (also) start FreeBSD. Add an OS entry for FreeBSD by adding it to /etc/grub.d/40_custom, and run update-grub afterwards so the new entry ends up in the generated configuration:


    menuentry "FreeBSD" --class freebsd --class bsd --class os {
    insmod ufs2
    insmod bsd
    set root=(hd0,gpt4)
    chainloader /boot/boot1.efi
    }

    Also recommended: set timeouts so you can actually pick an OS, in /etc/grub.d/custom.cfg:


    set timeout=5
    set timeout_style=menu

  • Reboot, hit escape at the right moment to get the GRUB menu, and choose FreeBSD. Boot into FreeBSD.
  • Configure wireless networking on FreeBSD for the Slimbook. I have one with an Intel 7265 wireless card, so I needed to set that up. Since the firmware comes from the filesystem, I ended up following the quick-start guide and futzing with rc.local to load the driver. Here’s my rc.local:


    #! /bin/sh
    /sbin/kldload if_iwm
    /sbin/ifconfig wlan0 create wlandev iwm0
    :

    and this is a bit of my rc.conf:


    # Wireless
    wlans_iwm0="wlan0"
    ifconfig_wlan0="WPA SYNCDHCP"

    Bear in mind that NetworkManager and other fancy bits are not available: configure wpa_supplicant.conf by hand and add SSIDs and PSKs there.
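    A network block in /etc/wpa_supplicant.conf looks roughly like this (the SSID and passphrase are placeholders):

    network={
        ssid="my-network"
        psk="my-passphrase"
    }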

  • Right now, Intel IGP after Broadwell (and the Slimbook is a Skylake) isn’t fully supported by the xf86-video-intel driver, so instead use scfb. This loses acceleration and some other features, but it gives you X11 right now, as opposed to sometime later when the newer drivers are merged.

    pkg install xf86-video-scfb

    Add some explicit, manual, X.org configuration:


    Section "Device"
    Identifier "Card0"
    Driver "scfb"
    EndSection

  • After that, follow my earlier Plasma 5 on FreeBSD HOWTO, including adding the Area51 repo. However, since this is 12-CURRENT, you need to use a different pkg repository URL; the repo-override format is sketched below.
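Repository overrides live in /usr/local/etc/pkg/repos/. A sketch of the general format only; the URL here is a placeholder, not the real Area51 repository for CURRENT:

    # /usr/local/etc/pkg/repos/Area51.conf (placeholder URL)
    Area51: {
        url: "http://example.org/freebsd/${ABI}/latest",
        enabled: yes
    }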

This concludes my laptop-futzing-about at Akademy this year: I have a laptop that dual-boots Linux and FreeBSD, and gives me an up-to-date Plasma 5 Desktop and KDE Applications on both — but that leaves me free to hack on whatever my work requires in the OS best suited to it each day of the week.

Categories: FLOSS Project Planets

Wiki, what’s going on? (Part 24-Badges and books)

Planet KDE - Mon, 2017-07-24 06:30
Books, badges and new functionalities are coming!

Hey WikiToLearn-ers! It has been a while, but now “What’s going on?” is back.

This last period was really difficult for all of us, but we never forgot about WikiToLearn. In spite of personal commitments and exams, the members of our community kept working day after day and now we see the results!

Electromagnetism is now ready!

In the first episode of “Meet the authors” we got to know Dan, a long-term contributor of ours. It is now a pleasure to announce that his book about electromagnetism is ready! Dan is one of the most active editors in our community, and on the Italian portal he has written several books: analysis, physics I, mechanics and electromagnetism! Kudos Daniele for your strong dedication to the project and for your contributions!

We have badges!

The tech team never stops, the platform is always at its best, and our sysadmins work hard to provide you great services. When we had problems we were able to solve them very quickly and get back on our feet! In this period our devs worked on a long-requested feature for our website: badges! Yeah, now WikiToLearn has badges! Since our project was born we have been asked for a way to certify an imported or reviewed book and to give its author/reviewer the proper merit. We are very happy to announce that this feature is now ready! When you donate a book to be imported, a proper tag will be associated with it on the website.

Professors have always asked us for a tag to certify their books’ authority. To accommodate this request we now have the “Reviewed” tag. Are you the author of a specific book, and would you like it to be certified? Now with WikiToLearn you can! One step further to guarantee high-quality content and to monitor revisions on our dynamic textbooks!

Let’s celebrate!

We never forget where we belong: that’s why we remind you that this week Akademy2017 is taking place in Almeria! Akademy is the annual conference of the KDE community. WikiToLearn was born under the KDE umbrella and still today we feel part of the KDE family. This year WikiToLearn is represented at Akademy by Vasudha, a GSoC student of ours working on Ruqola. Today we remember Akademy with extreme pleasure: two years ago, during Akademy2015, our project was officially born!

Since the first official announcement to the public, so much has happened. We can consider ourselves satisfied with the hard work we did and with the initial outreach we had. People’s feedback pushed us to work better and better to improve functionality and to satisfy users’ needs. In this period we had the occasion to spread the word about WikiToLearn and we obtained substantial involvement in our projects. We are extremely grateful to all the members of our community for the events organized, the books donated for importation, and the reviews and creation of new material on the platform.

For us now it’s time to work even harder. During these two years we came to realize what we were doing properly and what had to be modified. In the next few months we will be working on communication and style improvements to grow our community.

Content creation and usability are the two main issues we are facing right now. Any kind of user should be aware of what the platform offers and should be encouraged to write on it. WikiToLearn collects dynamic textbooks, and in the near future this innovative feature of our product has to become our strong point!

 

We are working for you, WikiToLearn-ers! Stay tuned, spread the word, join.wikitolearn.org and start sharing your knowledge in a completely innovative way!

 

The article Wiki, what’s going on? (Part 24-Badges and books) appeared first on Blogs from WikiToLearn.

Categories: FLOSS Project Planets

Catalin George Festila: Fix Gimp with python script.

Planet Python - Mon, 2017-07-24 06:24
Today I will show you how the Python language can help GIMP users.
From my point of view, Gimp does not properly import frames from GIF files.
This program imports GIF files in this way:

Using Python, you can get the correct frames from the GIF file.
Here’s my script, which uses the Python PIL module:
import sys
from PIL import Image, ImageSequence

try:
    img = Image.open(sys.argv[1])
except IOError:
    print "Can't load", sys.argv[1]
    sys.exit(1)

pal = img.getpalette()
prev = img.convert('RGBA')
prev_dispose = True
for i, frame in enumerate(ImageSequence.Iterator(img)):
    dispose = frame.dispose

    if frame.tile:
        # Partial-frame update: paste only the dirty tile region.
        x0, y0, x1, y1 = frame.tile[0][1]
        if not frame.palette.dirty:
            frame.putpalette(pal)
        frame = frame.crop((x0, y0, x1, y1))
        bbox = (x0, y0, x1, y1)
    else:
        bbox = None

    if dispose is None:
        # No disposal: composite this frame over the previous result.
        prev.paste(frame, bbox, frame.convert('RGBA'))
        prev.save('result_%03d.png' % i)
        prev_dispose = False
    else:
        if prev_dispose:
            prev = Image.new('RGBA', img.size, (0, 0, 0, 0))
        out = prev.copy()
        out.paste(frame, bbox, frame.convert('RGBA'))
        out.save('result_%03d.png' % i)

Name the python script convert_gif.py and then you can use it on the GIF file as follows:
C:\Python27>python.exe convert_gif.py 0001.gif

The final result has fewer images than Gimp produces, but this was to be expected.

Categories: FLOSS Project Planets

Jonathan Carter: Plans for DebCamp17

Planet Debian - Mon, 2017-07-24 05:08

In a few days, I’ll be attending DebCamp17 in Montréal, Canada. Here are a few things I plan to pay attention to:

  • Calamares: My main goal is to get Calamares (ooops, they have a TLS problem currently so weblink disabled until that’s fixed) in a great state for inclusion in the archives, along with a sane default configuration for Debian (calamares-settings-debian) that can easily be adapted for derivatives. Calamares itself is already looking good and might make it into the archives before DebCamp even starts, but the settings package will still need some work.
  • Gnome Shell Extensions: During the stretch development cycle I made great progress in packaging some of the most popular shell extensions, but there are a few more that I’d like to get in for buster: apt-update-indicator, dash-to-panel (done), proxy-switcher, tilix-dropdown, tilix-shortcut
  • zram-tools: Fix a few remaining issues in zram-tools and get it packaged into the archive.
  • DebConf Committee: Since January, I’ve been a delegate on the DebConf committee, and I’m hoping that we get some time to catch up in person before DebConf starts. We’ve been working on some issues together recently and so far we’ve been working together really well. We’re working on keeping DebConf organisation stable and improving community processes, and I hope that by the end of DebConf we’ll have some proposals that will prevent some recurring problems and also help mend old wounds from previous years.
  • ISO Image Writer: I plan to try out Jonathan Riddell’s ISO image writer tool and check whether it works with the use cases I’m interested in (Debian install and live media, images created with Calamares, boots UEFI/BIOS on optical and USB media). If it does what I want, I’ll probably package it too, unless Jonathan Riddell gets to it first.
  • Hashcheck: Kyle Robertze wrote a tool called Hashcheck for checking install media checksums from a live environment. If he gets a break during DebCamp from video team stuff, I’m hoping we can look at some improvements for it and also getting it packaged in Debian.
Categories: FLOSS Project Planets

clazy 1.2 released

Planet KDE - Mon, 2017-07-24 04:42

In the previous episode we presented how to uncover 32 Qt best practices at compile time with clazy. Today it’s time to show 5 more and other new goodies present in the freshly released clazy v1.2.

New checks

1. connect-not-normalized

Warns when the content of SIGNAL(), SLOT(), Q_ARG() and Q_RETURN_ARG() is not normalized. Using normalized signatures avoids unneeded memory allocations.

Example:

// warning: Signature is not normalized. Use void mySlot(int) instead of void mySlot(const int) …

The post clazy 1.2 released appeared first on KDAB.

Categories: FLOSS Project Planets

Fifth Blog Gsoc 2017

Planet KDE - Mon, 2017-07-24 03:00

Hello, this is the report for the second phase of GSoC. The last month was not easy. Some things had to be rewritten because they were not very well written. For example, I wrote a system of “sensors” whose logic was placed in the destructors of objects....

Categories: FLOSS Project Planets

Glassdimly tech Blog: Config Management Roundup

Planet Drupal - Mon, 2017-07-24 01:05

Quite the problem—Drupal config management on local and prod.

Drupal 8 can be quite punctilious: it will simply refuse to work if there's a mismatch between database and config objects on the filesystem.

So... how do we manage two sets of configurations—one for local, and one for production?

The Problem

I have modules like varnish installed on production that shouldn't be enabled on local, and modules that are enabled on local that shouldn't be enabled on production.

Categories: FLOSS Project Planets