FLOSS Project Planets

Fabio Zadrozny: PyDev 6.3.2: support for .pyi files

Planet Python - 14 hours 13 min ago
PyDev 6.3.2 is now available for download.

The main change in this release is that PyDev will now consider .pyi (stub) files when doing type inference. There is still a shortcoming: the .pyi file must be in the same directory as the typed .py file, and it's not yet possible to use stubs to get type inference for modules which are compiled (for instance PyQt).
I hope to address that in the next release. Initially I wanted to delay this release to add full support for .pyi files, but there was a critical bug opening the preferences page for code completion, so it really couldn't be delayed any longer. Nevertheless, the current support is already useful for users keeping .pyi files alongside their .py files.
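
For readers who haven't used stubs: a .pyi file simply contains annotated signatures for the names in the corresponding .py module. A minimal sketch (the module and function names are made up purely for illustration) of the side-by-side layout PyDev now understands:

# geometry.py -- the implementation module (hypothetical example)
def area(width, height):
    return width * height

# geometry.pyi -- the stub, placed in the same directory, used only for type inference
def area(width: float, height: float) -> float: ...
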
Code completion was also improved: it is now better at discovering whether a call is for a bound or unbound method, and it got performance improvements through caching of some intermediary results during code completion.
Categories: FLOSS Project Planets

Codementor: How I learned Python for Machine Learning

Planet Python - 16 hours 45 min ago
About me: I am a senior undergrad of Maths and Computing at the Indian Institute of Technology, BHU (Varanasi). Why I wanted to learn Python for Machine Learning: The enthusiasm revolving Machine...
Categories: FLOSS Project Planets

Guest post: The Importance of QA

Planet KDE - 19 hours 48 min ago

Today we have a guest post from Buovjaga, our friendly local QA evangelist for LibreOffice, KDE, Inkscape, Firefox and Thunderbird. Without further ado, I’d like to present…

The Importance of QA

With this post I hope to convince you that a strong quality assurance team can do miraculous things for a free software project.

The spectrum of QA work is wide, and the way it lowers the skill requirements for contributing makes it particularly relevant for KDE’s onboarding initiative.

The critical phase of onboarding a new contributor is the first contact. Sometimes the new person does not know what they want to do. Often you do not have a clear picture of what skills they have. You need to act fast or they will lose interest and disappear! This is the moment where you should hand them snacks: a query of bugs that need to be confirmed or re-confirmed. This is the lowest threshold for them to step across and into being a contributor, because:

  • They do not need to learn version control
  • They do not need to learn the patch submission processes
  • They do not need to be wordsmiths
  • They do not need to know interface design or how to draw pretty pictures
  • They should not even need to know how to use the features they are testing, because a valid bug report includes clear steps on what to do!

QA is highly important in itself, but it is also a gateway drug. A simplified story of the evolution of a contributor might be as follows:

  1. They work on something meaningful
  2. They get familiar with the structure of the project
  3. They discover their own potential and the multitude of things they can help with

Not only does this evolution flow naturally through the QA team, but the experienced members are in a unique position to speed it up. This is because QA in the course of its work typically has to ferret out information from all the other teams. This leads to QA

  1. Knowing who the subject-matter experts are
  2. Discovering weak points in the organisation
  3. Helping the various teams stay in sync with each other

In this aspect QA is acting like neurotransmitters in the body of the project.

The most apparent beneficial effect of having a strong QA team is that the developers are not distracted by massive amounts of first-stage bug analysis.

Primitive development team working in the bug tracker without the luxury of a QA team

In QA, too many cooks do not spoil the broth. A large and diverse team is more effective than a small one when trying to keep up with a myriad of software and hardware configurations.
A large team also allows members the freedom to level up their skills. The more experience the members have with advanced triaging techniques, the less work developers have to spend per bug fix.

There is a long road ahead for KDE to reach a healthy state regarding QA. Recruit contributors early and often. Aim for a feedback loop of recruiting, where even fresh contributors brainstorm to come up with ways to find new people.

I invite everyone to go through these articles and improve them:

I also recommend that KDE look into making it easy for QA to perform git bisects for pinpointing regressions. Perhaps this could be achieved by offering compressed repositories containing binary snapshots for every single commit of a project, like LibreOffice does.

Categories: FLOSS Project Planets

Unsetting QT_QPA_PLATFORM environment variable by default

Planet KDE - 19 hours 57 min ago

Since the introduction of the Plasma/Wayland session, we have set the QT_QPA_PLATFORM environment variable to wayland by default. After a long and hard discussion we decided to no longer do this starting with Plasma 5.13. This was a hard decision, and nobody involved liked it.

The environment variable forced Qt applications to use the wayland QPA platform plugin. This exposed a problem which is difficult to address: if Qt does not have the wayland QPA platform plugin, the application just doesn’t start. If you start it from a console, the application will tell you:

This application failed to start because it could not find or load the Qt platform plugin "wayland" in "". Available platform plugins are: minimal, xcb. Reinstalling the application may fix this problem. Aborted (core dumped)

That is not really useful information, and it does not tell the user what to do. Neither does it tell the user where the actual problem is and how to solve it. As mentioned, when using a graphical launcher it’s worse, as the app just doesn’t start without any feedback.

But how can it happen that the QPA platform plugin is missing although Plasma itself happily uses it? The problem is that applications installed outside of the system bundle their own Qt, and Qt does not (yet) include the QtWayland QPA platform plugin. This affects proprietary applications, FLOSS applications bundled as AppImages, FLOSS applications bundled as Flatpaks that are not distributed by KDE, and even the Qt installer itself. In my opinion this is a showstopper for running a Wayland session.

The best solution would be for Qt to include the QPA platform plugin and to have proper auto-detection based on XDG_SESSION_TYPE. The situation will improve with Qt 5.11, but that doesn’t really help here, as the Qt LTS versions will continue to face the problem.

For now we implemented a change in Plasma 5.13 so that we don’t need to set the environment variable any more. Plasma is able to select the appropriate platform plugin based on the XDG_SESSION_TYPE environment variable. Non-Plasma processes will use the default platform plugin: with Qt < 5.11 this is xcb, and with Qt 5.11 this will most likely change to wayland. KDE’s Flatpak applications pick Wayland by default in a Wayland session and are unaffected by the change.

What is really sad about the change is that the Wayland QPA platform plugin gets less testing now. So we would like to ask our users to continue testing applications with the Wayland platform plugin, by setting the environment variable manually or by specifying --platform wayland when starting an application.

Categories: FLOSS Project Planets

Kushal Das: How to delete your Facebook account?

Planet Python - 21 hours 47 sec ago

I had been planning to delete my Facebook account for some time but never took the actual steps to do it. The recent news on how companies are using data from Facebook made me take that next step. And I know Snowden has been talking about these issues for a long time (feel free to read a recent interview); I should have done this before. I was just lazy.

First, download an archive of all your current information

Log in to Facebook and go to your settings page. There you will see a link saying Download a copy of your Facebook data; click on that. It will ask for your password and then take some time to generate an archive, which you can download once it is ready.

Let us ask Facebook to delete the account

Warning: Once you delete your account, you cannot get your data back. So take the next steps only after thinking it through clearly (personally, I can say it is a good first step towards slowly gaining back privacy).

Go to this link to see the following screen.

If you click on the blue Delete my account button, it will open the next screen, where it will ask you to confirm your password and also fill in the captcha text.

After this, you will see the final screen. It will take around 90 days to delete all of your information.

Remember to use long passphrases everywhere

Now you have deleted your account. But remember that it is just one single step towards privacy. There are various other things you can do. I think the next step should be about all of your passwords: read this blog post about how to generate long passphrases, and use those instead of short passwords. You should also use a proper password manager to save all of these passwords.


Categories: FLOSS Project Planets

Iustin Pop: Hakyll basics

Planet Debian - Tue, 2018-03-20 22:00

As part of my migration to Hakyll, I had to spend quite a bit of time understanding how it works before I became somewhat “at-home” with it. There are many posts that show “how to do x”, but not so many that explain its inner workings. Let me try to fix that: at its core, Hakyll is nothing else than a combination of make and m4, all in one. Simple, right? Let’s see :)

Note: in the following, basic proficiency with Haskell is assumed.

Monads and data types

Rules

The first area (the make equivalent), more precisely the Rules monad, concerns itself with the rules for mapping source files into output files, or creating output files from scratch.

Key to this mapping is the concept of an Identifier, which is a name in an abstract namespace. Most of the time—e.g. for all the examples in the upstream Hakyll tutorial—this identifier actually maps to a real source file, but this is not required; you can create an identifier from any string value.

The similarity, or relation, to file paths manifests in two ways:

  • the Identifier data type, although opaque, is internally implemented as a simple data type consisting of a file path and a “version”; the file path here points to the source file (if any), while the version is rather a variant of the item (not a numeric version!).
  • if the identifier has been included in a rule, it will have an output file (in the Compiler monad, via getRoute).

In effect, the Rules monad is all about taking source files (as identifiers) or creating them from scratch, and mapping them to output locations, while also declaring how to transform—or create—the contents of the source into the output (more on this later). Anyone can create an identifier value via fromFilePath, but “registering” them into the rules monad is done by one of:

  • match, which takes a Pattern and registers the identifiers of the files on disk that match it
  • create, which registers the given list of identifiers directly, without requiring corresponding files

Note: I’m probably misusing the term “registered” here. It’s not the specific value that is registered, but the identifier’s file path. Once this string value has been registered, one can use a different identifier value with a similar string (value) in various function calls.

Note: whether we use match or create doesn’t matter; only the actual values matter. So match "foo.bar" is equivalent to create ["foo.bar"]; match here takes the list of identifiers from the file-system, but does not associate them with the files themselves—it’s just a way to get the list of strings.

The second argument to the match/create calls is another rules monad, in which we process the identifiers and declare how to transform them.

This transformation has, as described, two aspects: how to map the file path to an output path, via the Rules data type, and how to compile the body, in the Compiler monad.

Name mapping

The name mapping starts with the route call, which lifts the routes into the rules monad.

The routing has the usual expected functionality:

  • idRoute :: Routes, which maps 1:1 the input file name to the output one.
  • setExtension :: String -> Routes, which changes the extension of the filename, or sets it (if there wasn’t any).
  • constRoute :: FilePath -> Routes, which is special in that it always results in the same, fixed output filename; this is obviously useful only for rules matching a single identifier.
  • and a few more options, like building the route based on the identifier (customRoute), building it based on metadata associated to the identifier (metadataRoute), composing routes, match-and-replace, etc.

All in all, routes offer all the needed functionality for mapping.

Note that how we declare the input identifier and how we compute the output route is irrelevant; what matters are the actual values. So for an identifier with name (file path) foo.bar, route idRoute is equivalent to route (constRoute "foo.bar").


Slightly into more interesting territory here, as we’re moving beyond just file paths :) Lifting a compiler into the rules monad is done via the compile function:

compile :: (Binary a, Typeable a, Writable a) => Compiler (Item a) -> Rules ()

The Compiler monad result is an Item a, which is just an identifier with a body (of type a). The type variable a means we can return any Writable item. Many of the compiler functions work with/return String, but the flexibility to use other types is there.

The functionality in this module revolves around four topics:

The current identifier

First the very straightforward functions for the identifier itself:

  • getUnderlying :: Compiler Identifier, just returns the identifier
  • getUnderlyingExtension :: Compiler String, returns the extension

And then for the body (data) of the identifier (mostly copied from the haddock of the module):

  • getResourceBody :: Compiler (Item String): returns the full contents of the matched source file as a string, but without metadata preamble, if there was one.
  • getResourceString :: Compiler (Item String), returns the full contents of the matched source file as a string.
  • getResourceLBS :: Compiler (Item ByteString), equivalent to the above but as lazy bytestring.
  • getResourceFilePath :: Compiler FilePath, returns the file path of the resource we are compiling.

More or less, these return the data to enable doing arbitrary things to it, and they are the cornerstone of a static site compiler. One could implement a simple “copy” compiler by doing just:

match "*.html" $ do -- route to the same path, per earlier explanation. route idRoute -- the compiler just returns the body of the source file. compile getResourceLBS

All the other functions in the module work on arbitrary identifiers.


Routes

I’m used to Yesod and its safe routes functionality. Hakyll has something slightly weaker, but with programmer discipline it can allow similar levels of “I know this will point to the right thing” (and maybe correct escaping as well). Enter the:

getRoute :: Identifier -> Compiler (Maybe FilePath)

function which I alluded to earlier, and which—either for the current identifier or another identifier—returns the destination file path, which is useful for composing links (as in HTML links) to it.

For example, instead of hard-coding the path to the archive page, as /archive.html, one can instead do the following:

let archiveId = "archive.html" create [archiveId] $ do -- build here the archive page -- later in the index page create "index.html" $ do compile $ do -- compute the actual url: archiveUrl <- toUrl <$> getRoute archiveId -- then use it in the creation of the index.html page

The reuse of archiveId above ensures that if the actual path to the archive page changes (renames, site reorganisation, etc.), then all the links to it (assuming, again, discipline of not hard-coding them) are automatically pointing to the right place.

Working with other identifiers

Getting to the interesting aspect now. In the compiler monad, one can ask for any other identifier, whether it was already loaded/compiled or not—the monad takes care of tracking dependencies/compiling automatically/etc.

There are two main functions:

  • load :: (Binary a, Typeable a) => Identifier -> Compiler (Item a), which returns a single item, and
  • loadAll :: (Binary a, Typeable a) => Pattern -> Compiler [Item a], which returns a list of items, based on the same patterns used in the rules monad.

If the requested identifier/pattern does not match actual identifiers declared in the “parent” rules monad, then these calls will fail (as in monadic fail).

The use of other identifiers in a compiler step is what allows moving beyond “input file to output file”; aggregating a list of pages (e.g. blog posts) into a single archive page is the most obvious example.

But sometimes getting just the final result of the compilation step (of other identifiers) is not flexible enough—in case of HTML output, this includes the entire page, including the <html><head>…</head> part, not only the body we might be interested in. So, to ease any aggregation, one uses snapshots.


Snapshots

Snapshots allow, well, snapshotting the intermediate result under a specific name, to allow later retrieval:

  • saveSnapshot :: (Binary a, Typeable a) => Snapshot -> Item a -> Compiler (Item a), to save a snapshot
  • loadSnapshot :: (Binary a, Typeable a) => Identifier -> Snapshot -> Compiler (Item a), to load a snapshot, similar to load
  • loadAllSnapshots :: (Binary a, Typeable a) => Pattern -> Snapshot -> Compiler [Item a], similar to loadAll

One can save an arbitrary number of snapshots at various steps of the compilation, and then re-use them.

Note: load and loadAll are actually just the snapshot variants, with a hard-coded value for the snapshot. As I write this, the value is "_final", so it’s probably best not to use the underscore prefix for one’s own snapshots. A bit of a shame that this is not done better, type-wise.

What next?

We have rules to transform things, including smart name transforming, we have compiler functionality to transform the data. But everything mentioned until now is very generic, fundamental functionality, bare-bones to the bone (ha!).

With just this functionality, you have everything needed to build an actual site. But starting at this level would be too tedious even for hard-core fans of DIY, so Hakyll comes with some built-in extra functionality.

And that will be the next post in the series. This one is too long already :)

Categories: FLOSS Project Planets

Wingware News: Wing Python IDE 6.0.11: March 21, 2018

Planet Python - Tue, 2018-03-20 21:00
This release implements auto-save and restore for remote files, adds a Russian translation of the UI (thanks to Alexandr Dragukin), improves remote development error reporting and recovery after network breaks, correctly terminates SSH tunnels when switching projects or quitting, fixes severe network slowdown seen on High Sierra, auto-reactivates expired annual licenses without restarting Wing, and makes about 20 other improvements.
Categories: FLOSS Project Planets

Justin Mason: Links for 2018-03-20

Planet Apache - Tue, 2018-03-20 19:58
  • SXSW 2018: A Look Back at the 1960s PLATO Computing System – IEEE Spectrum

    Author Brian Dear on how these terminals were designed for coursework, but students preferred to chat and play games […] “Out of the top 10 programs on PLATO running any day, most were games,” Dear says. “They used more CPU time than anything else.” In one popular game called Empire, players blast each other’s spaceships with phasers and torpedoes in order to take over planets. And PLATO had code review built into the OS: Another helpful feature that no longer exists was called Term Comment. It allowed users to leave feedback for developers and programmers at any place within a program where they spotted a typo or had trouble completing a task. To do this, the user would simply open a comment box and leave a note right there on the screen. Term Comment would append the comment to the user’s place in the program so that the recipient could easily navigate to it and clearly see the problem, instead of trying to recreate it from scratch on their own system. “That was immensely useful for developers,” Dear says. “If you were doing QA on software, you could quickly comment, and it would track exactly where the user left this comment. We never really got this on the Web, and it’s such a shame that we didn’t.”

    (tags: plato computing history chat empire gaming code-review coding brian-dear)

Categories: FLOSS Project Planets

Steinar H. Gunderson: Debian CEF packages

Planet Debian - Tue, 2018-03-20 19:38

I've created some Debian CEF packages—CEF isn't the easiest thing to package (and it takes an hour to build even on my 20-core server, since it needs to build basically all of Chromium), but it's fairly rewarding to see everything fall into place. It should benefit not only Nageru, but also OBS and potentially CasparCG if anyone wants to package that.

It's not in the NEW queue because it depends on a patch to chromium that I hope the Chromium maintainers are brave enough to include. :-)

Categories: FLOSS Project Planets

FSF Blogs: Friday Free Software Directory IRC meetup time: March 23rd starting at 12:00 p.m. EDT/16:00 UTC

GNU Planet! - Tue, 2018-03-20 18:15

Help improve the Free Software Directory by adding new entries and updating existing ones. Every Friday we meet on IRC in the #fsf channel on irc.freenode.org.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic category and descriptions, to providing detailed info about version control, IRC channels, documentation, and licensing info that has been carefully checked by FSF staff and trained volunteers.

When a user comes to the Directory, they know that everything in it is free software, has only free dependencies, and runs on a free OS. With over 16,000 entries, it is a massive repository of information about free software.

While the Directory has been and continues to be a great resource to the world for many years now, it has the potential to be a resource of even greater value. But it needs your help! And since it's a MediaWiki instance, it's easy for anyone to edit and contribute to the Directory.

This weekend, LibrePlanet 2018 converges on Cambridge, MA. One fantastic feature of LibrePlanet is the accessibility and potential for remote attendance because of the livestream of events. Livestreaming concurrent events is no easy feat when considering the moving pieces; all this momentum pushes development from ABYSS to HumpBack Anglerfish. This week, while we work on adding new programs, we can also look back at our favorite LibrePlanet speeches from previous years.

If you are eager to help, and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today! There are also weekly Directory Meeting pages that everyone is welcome to contribute to before, during, and after each meeting.

Categories: FLOSS Project Planets

Reuven Lerner: Four ways to assign variables in Python

Planet Python - Tue, 2018-03-20 16:02

Within minutes of starting to learn Python, everyone learns how to define a variable. You can say:

x = 100

and voila!  You have created a variable “x”, and assigned the integer value 100 to it.  It couldn’t be simpler than that.

But guess what?  This isn’t the only way to define variables, let alone assign to them, in Python. And by understanding the other ways that we can define variables, and the nuances of the “simple” variable assignment that I described above, you can get a better appreciation for Python’s consistency, along with the idea that “everything is an object.”

Method 1: Plain ol’ assignment

The first way to define a variable is with the assignment operator, =.  Once again, I can say

x = 100

and I have assigned to x. If x didn’t exist before, then it does now. If it did exist before, then x now points to a new and different object.

What if the new, different object is also of a new and different type? We don’t really care; the nature of a dynamic language is such that any variable can point to an object of any type. When we say “type(x)” in our code, we’re not really asking what type of object the variable “x” can hold; rather, we’re asking what type of object “x” is pointing to right now.

One of the major misconceptions that I come across in my Python courses and newsletter is just how Python allocates and stores data.  Many of my students come from a C or C++ background, and are thus used to the “box model” of variable assignment: When we declare the variable “x”, we have to declare it with a type.  The language then allocates enough memory to store that type, and gives that memory storage an alias, “x”.  When I say “x=100”, the language puts the value 100 inside of the box named “x”.  If the box isn’t big enough, bad news!

In Python, though, we don’t use the box model of variable assignment. Rather, we have the dictionary model of variable assignment: When we assign “x=100”, we’re creating (or updating) a key-value pair in a dictionary. The key’s name is “x”, and the value is 100, or anything else at all. Just as we don’t care what type of data is stored in a dictionary’s values, we also don’t care what type of value is stored inside of a Python variable’s value.

(I know, it’s a bit mind-bending to say that Python variables are stored in a dictionary, when dictionaries are themselves Python values, and are stored in Python variables. It’s turtles all the way down, as they say.)

Don’t believe me?  Just run the “globals” function in Python. You’ll get a dictionary back, in which the keys are all of the global variables you’ve defined, and the values are all of the values of those variables.  For example:

>>> x = 100
>>> y = [10, 20, 30]
>>> globals()
{'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <class '_frozen_importlib.BuiltinImporter'>, '__spec__': None, '__annotations__': {}, '__builtins__': <module 'builtins' (built-in)>, 'x': 100, 'y': [10, 20, 30]}
>>> x = 200
>>> y = {100, 200, 300}
>>> globals()
{'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <class '_frozen_importlib.BuiltinImporter'>, '__spec__': None, '__annotations__': {}, '__builtins__': <module 'builtins' (built-in)>, 'x': 200, 'y': {200, 100, 300}}

But wait, we can do even better:

>>> globals()['x'] = 300
>>> x
300
>>> globals()['y'] = {'a':1, 'b':2}
>>> y
{'a': 1, 'b': 2}

That’s right; the result of invoking “globals” is not only a dictionary showing us all of the global variables, but is something we can modify — and whose modifications not only reflect, but also affect, our global variables.

Now, you might be thinking, “But not all Python variables are globals.” That’s true; Python’s scoping rules describe four levels: Local, Enclosing, Global, and Builtins.  Normally, assignment creates/updates either a global variable or a local one.  How does Python know which one to use?

The answer is pretty simple, actually: When you assign inside of a function, you’re working with a local variable. When you assign outside of a function, you’re working with a global variable.  It’s pretty much as simple as that.  Sure, there are some exceptions, such as when you use the “global” or “nonlocal” keyword to force Python’s hand, and create/update a variable in a non-local scope when you’re within a function.   But the overwhelming majority of the time, you can assume that assignment within a function creates or updates a local variable, and outside of a function creates or updates a global variable.

Note that this means assignment inside of a “for” loop or “if” statement doesn’t create a variable in some new, smaller scope; at the top level of a program, it still creates or updates a global one.  It’s common for my students to believe that because they are inside of an indented block, the variable they are creating is not global.  Not true!

If you’re in a function and want to see the variables that have been created, you can use the “locals” function, which operates just like the “globals” function we looked at earlier.  Even better, you can use the “vars” function, which invokes “locals” inside of a function and “globals” outside of a function.
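
Here is a minimal sketch (the variable and function names are invented purely for illustration) showing how the location of an assignment, not its indentation, determines whether we get a global or a local, and how vars() reflects the current namespace:

x = 100                     # module level: creates/updates the global "x"

if x > 50:
    y = "big"               # still a global: "if" blocks don't create a new scope

def double():
    z = x * 2               # assignment inside a function: "z" is a local variable
    print(vars())           # inside a function, vars() behaves like locals()
    return z

double()                    # prints {'z': 200}
print('y' in globals())     # True
print('z' in globals())     # False; "z" only existed while double() was running
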

So the first type of variable assignment in Python is also the most common, and can be subdivided into two basic categories, local and global.

I should add that I’m only talking here about variable assignment, not attribute assignment. Attributes (i.e., anything that comes after a “.” character, as “b” in the expression “a.b”) are a totally different kettle of fish, and have their own rules and quirks.

Method 2: def

If you want to create a function in Python, you use the “def” keyword, as in:

>>> def hello(name):
...     return f"Hello, {name}"    # I love f-strings!

While we often think that “def” simply defines a function, I think it’s easier to think of “def” as actually doing two things:

  • Creating a new function object
  • Assigning that function object to the name immediately following the “def”

Thinking about “def” this way, as object creation + assignment, makes it easier to understand a variety of different situations that Python developers encounter.

First of all, “def” defines variables in the same scope as a simple variable assignment would. So if you’re in the global scope, you’re creating a global variable. Remember that Python doesn’t have separate namespaces for data and functions. This means that if you’re not careful, you can accidentally destroy your function or variable definition:

>>> x = 100
>>> def x():
...     return "Hello!"
>>> print(x*2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for *: 'function' and 'int'


>>> def x():
...     return "Hello!"
>>> x = 100
>>> print(x())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable

In both of these cases, the programmer assumed that setting a variable and defining a function wouldn’t interfere with one another, with clearly disastrous results.

The fact that “def” assigns to a variable also explains why you cannot define a function multiple times in Python, each with its own function signature (i.e., argument count and types). Some people, for example, expect to be able to do the following:

>>> def hello(name):
...     return f"Hello, {name}"
>>> def hello(first, last):
...     return f"Hello, {first} {last}"

They then assume that we can invoke “hello” with either one argument (and invoke the first version) or two arguments (and invoke the second).  However, this isn’t true; the fact that “def” defines a variable means that the second “def” obliterates whatever you did with the first.  Consider this code, in which I define “x” twice in a row with two different values:

>>> x = 100
>>> x = 200

You cannot realistically think that Python will know which “x” to choose according to context, right?  In the same way, Python cannot choose a function definition for you.  It looks up the function’s name in “globals()”, gets the function object from that dictionary, and then invokes it (thanks to the parentheses).

A final aspect of seeing “def” as variable assignment has to do with inner functions — that is, functions defined within other functions.  For example:

>>> def foo():
...     def bar():
...         print("I'm in bar!")
...     return bar

What’s happening here?  Well, let’s assume that we execute function “foo”.    Now we’re inside of a function, which means that every time we define a variable, it’ll be a local variable, rather than a global one. The next thing we hit is “def”, which defines a variable.  Well, that means “bar” is a local variable inside of the “foo” function.

Then, in the next line of “foo”, after defining “bar”, we return a local variable to whoever called “foo”.  We can return any kind of local variable we like from a function; in this particular case, though, we’re returning a function object.

This just scratches the surface of inner functions, but the logic — we’re defining and then returning a local variable in “foo” — is consistent, and should make more sense if you think of “def” as just assigning variables.

Method 3: import

One of the most powerful aspects of Python is its very mature standard library. When you download and install Python, you have hundreds (thousands?) of modules available to you and your programs, just by using the “import” statement.

Well, guess what? “import” is defining a variable, too. Indeed, it often helps to think about “import” as defining a single variable, namely the one whose name you give when you invoke it.  For example:

>>> import os

When I invoke the above, I am doing three things:

  • Python finds the module file os.py (or some variation thereof) and executes it
  • Python defines the variable “os” in the current scope
  • All of the global variables defined in the module are turned into attributes on the “os” variable.

It’s pretty easy to see that “import” defines a variable:

>>> type(os)
<class 'module'>

What’s a bit harder to understand is the idea that the global variables defined inside of our module all become attributes on the module object. For example, in “os.py”, there is a top-level definition of the “walk” function. Inside of the module, it’s a global variable (function).  But to whoever imports “os”, “walk” isn’t a variable.  Rather, it’s an attribute, “os.walk”.  That’s why if we want to invoke it, we need to use the complete name, “os.walk”.

That’s fine, and works pretty well overall. But if you’re going to be using “os.walk” a lot, then you might not want the overhead of saying “os.walk” each and every time. Instead, you might want to create a global variable whose value is the same as “os.walk”.  You might even say something like this:

walk = os.walk

Since we’re assigning a variable, and since we’re not inside of a function, we’re creating a global variable here.  And since “os.walk” is already defined, we’re simply adding a new reference (name) to “os.walk”.

A faster, easier, and more Pythonic way to do this is with:

from os import walk

Although to be honest, this isn’t quite the same as what I did before. That’s because “from os import walk” does find the “os.py” module file, and does execute its contents — but it doesn’t define “os” as a variable in our global namespace.  Rather, it only creates “walk” as a variable, pointing to the value of what the module knows as “os.walk”.

Does that mean that “from … import …” doesn’t actually load the module? That would be pretty silly, in that future imports of “os” would then be less effective and efficient.

Python is actually pretty clever in this case: When it executes the module file, it creates the module object in sys.modules — a dictionary whose keys are the names of the modules we have loaded.  That’s how “import” knows to import a file only once; it can run “in” on sys.modules, check to see if a module has already been loaded, and only actually import it if the module is missing from that dict.

What if the module has been loaded?  Then “import” still defines the variable in the current namespace.  Thus, if your project has “import os” in 10 different files, then only the first invocation of “import os” actually loads “os.py”.  The nine following times, you just get a variable definition that points to the module object.
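
A quick illustrative session (assuming the json module hasn’t already been imported by anything else at startup):

>>> import sys
>>> 'json' in sys.modules        # not loaded yet
False
>>> import json                  # first import: the module's code is actually executed
>>> 'json' in sys.modules
True
>>> import json                  # subsequent imports just re-bind the name "json"
>>> sys.modules['json'] is json  # both names refer to the very same module object
True
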

In the case of “from-import”, the same thing happens, except that instead of assigning the module’s name as a variable in the current namespace, you get the name(s) you explicitly asked for.

In theory, you can use “import” and “from-import” inside of a function, in which case you’ll define a local variable.  I’m sure that there is some use for doing so, but I can’t think of what it would be.  However, here Python is also remarkably consistent, allowing you to do things that wouldn’t necessarily be useful.
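
For example, a function-local import might look like this (a hypothetical helper, shown only to illustrate the scoping):

def read_config(path):
    import json              # "json" is bound as a local variable of read_config
    with open(path) as f:
        return json.load(f)

# Outside the function, the name "json" is not defined:
# print(json.dumps({}))     # would raise NameError
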

In Python 2.7, there were two different implementations of the “pickle” module, one written in Python (“pickle”) and one written in C (“cPickle”).  The latter executed much faster, but wasn’t available on all platforms.  It was often recommended that programmers do this, to get the best possible version:

try:
    import cPickle as pickle
except ImportError:
    import pickle

>>> pickle
<module 'cPickle' from '/usr/local/Cellar/python@2/2.7.14_3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/cPickle.so'>

This technique (which isn’t necessary with Python 3) only worked because “import” is executed at runtime, along with other Python code, and because “import” defines a variable.  What the above code basically says is, “Try to load cPickle but if you succeed, use the name pickle.   If you can’t do that, then just use the regular ol’ pickle library.”

Method 4: class

The fourth and final way to define a variable in Python is with the “class” statement, which we use to create (big surprise) classes.  We can create a class as follows:

class Foo(object):
    def __init__(self, x):
        self.x = x

Newcomers to Python often think of classes as blueprints, or plans, or descriptions, of the objects that will be created. But they aren’t that at all — classes in Python are objects, just like anything else.  And since they’re objects, they have both a type (which we can see with the “type” function) and attributes (which we can see with the “dir” function):

>>> type(Foo)
<class 'type'>
>>> dir(Foo)
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__']

The “class” keyword defines a new variable — Foo, in this case — which is assigned to a class object. A class object is just like everything else, except that it is callable, meaning that we can execute it (with parentheses) and get a new object back:

>>> Foo(10)
<__main__.Foo object at 0x10da5fc18>

Defining a class means that we’re defining a variable — which means that if we have a previous class of the same name, we’re going to overwrite it, just as was the case with functions.

Class objects, like nearly all objects in Python, can have attributes added to them at runtime.  We can thus say:

>>> Foo.abc = 100

And now I’ve added the “abc” attribute to the “Foo” object, which is a class.  I can also do that inside of a class definition:

>>> class Foo(object):
...     abc = 100
...     def __init__(self, x):
...         self.x = x
>>> Foo.abc
100

How is this possible?  Haven’t we, in our class definition, created a variable “abc”?  Nope — it looks like a variable, but it’s actually an attribute on the “Foo” class.  And it must be an attribute, not only because we can retrieve it with “Foo.abc” later, but because all assignments inside of a class definition aren’t creating variables, but rather attributes.  Inside of “class Foo”, we’re thus creating two attributes on the class — not only “abc”, but also “__init__”, the method.

Does this seem familiar, the idea that global variables we define in one context are seen as attributes in another context?  Yes, that’s right — we saw the same thing with modules.  Global variables defined in a module are seen, once the module is imported, as attributes on that module.

You can thus think of classes, in some ways, as modules that don’t require a separate file. There are other issues as well, such as the difference between methods and functions — but in general, understanding that whatever you do inside of a class definition is treated much like the body of a module can help to improve your understanding.

I don’t do this very often, but what does it mean, then, if I define a class within another class?  In some languages, such “inner classes” are private, and only available for use within the outer class. Not so in Python; since “class” defines a variable and any variable assignments within a class actually create attributes, an inner class is available to us via the outer class’s attribute:

>>> class Foo(object):
...     def __init__(self, x):
...         self.x = x
...     class Bar(object):
...         def __init__(self, y):
...             self.y = y
...
>>> b = Foo.Bar(20)
>>> b.y
20

Variables in Python aren’t just for storing data; they store our functions, modules, and classes, as well.  Understanding how various keywords define and update these variables can really help to understand what Python code is doing — and also provides for some cool tricks you can use in your own code.

The post Four ways to assign variables in Python appeared first on Lerner Consulting Blog.

Categories: FLOSS Project Planets

Reproducible builds folks: Reproducible Builds: Weekly report #151

Planet Debian - Tue, 2018-03-20 15:59

Here's what happened in the Reproducible Builds effort between Sunday March 11 and Saturday March 17 2018:

Upcoming events

Patches submitted

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (168)
  • Emmanuel Bourg (2)
  • Pirate Praveen (1)
  • Tiago Stürmer Daitx (1)

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Categories: FLOSS Project Planets

Cutelyst 2 released with HTTP/2 support

Planet KDE - Tue, 2018-03-20 15:51

Cutelyst, the Qt/C++ web framework, just got a major release update. Around one and a half years ago Cutelyst v1 got its first release with a stable API/ABI; many improvements were made during this period, but now it was time to clean up the mistakes and make room for new features.

Porting applications to v2 is mostly a breeze: since most API changes were done on the Engine class, replacing 1 with 2 and recompiling should be enough in most cases. At least this was the case for CMlyst, Stickyst and my personal applications.

Due to the cleanup, the Cutelyst Core module got a size reduction, while the WSGI module increased a bit due to the new HTTP/2 parser. Windows MSVC was finally able to build and test all modules.

The WSGI module now defaults to using our custom epoll event loop (it can be switched back to Qt’s default one with an environment variable); this allows for steady performance, without degradation, when an increased number of simultaneous connections is made.

The validator plugins by Matthias got their share of improvements and a new password quality validator was added, plus manual pages for the tools.

The HTTP/2 parser adds more value to our framework. Its binary nature makes it very easy to implement; in two days most of it was already working. But HTTP/2 comes with a dependency called HPACK, which has its own RFC. HPACK is the header compression mechanism created for HTTP/2, because the gzip compression used in SPDY had security issues when used over HTTPS, known as CRIME.

The problem is that HPACK is not very trivial to implement. It took many hours and made KCalc my best friend while converting hex to binary to decimal and what not…

Cutelyst’s HTTP/2 parser passes all tests of a tool named h2spec, and using h2load it even showed more requests per second than HTTP/1, but it’s complicated to benchmark these two different protocols, especially with different load tools.

Upgrading from HTTP/1.1 is supported with a switch, as is enabling H2 on HTTPS using ALPN negotiation (which is the only option browsers support). H2C, or HTTP/2 in clear text, is also supported, but it’s only useful if the client can connect with prior knowledge.

If you know HTTP/2, your question is: “Does it support server push?” No, it doesn’t at the moment. SERVER_PUSH is a feature that allows the server to send CSS and JavaScript without the browser asking for it, so it can avoid the request the browser would otherwise make. However, this feature isn’t magical: it won’t make slow websites super fast, it’s also hard to do right, and each browser has its own complicated issues with it.

I strongly recommend reading this https://jakearchibald.com/2017/h2-push-tougher-than-i-thought/ .

This does not mean SERVER_PUSH won’t be implemented; quite the opposite: due to the need to implement it properly, I want more time to study the RFC and browser behavior so that I can provide a good API.

I have also done some last minute performance improvements with the help of KDAB Hotspot/perf, and I must say that the days of profiling with weird/huge perf command line options are gone, awesome tool!

Get it! https://github.com/cutelyst/cutelyst/archive/v2.0.0.tar.gz

If you like it please give us a star on GitHub!

Have fun!

Categories: FLOSS Project Planets

Nikola: Nikola v7.8.13 is out! (maintenance release)

Planet Python - Tue, 2018-03-20 13:50

On behalf of the Nikola team, I am pleased to announce the immediate availability of Nikola v7.8.13. This is a maintenance release for the v7 series.

Future releases in the v7 series are going to be small maintenance releases that include bugfixes only, as work on v8.0.0 is underway.

What is Nikola?

Nikola is a static site and blog generator, written in Python. It can use Mako and Jinja2 templates, and input in many popular markup formats, such as reStructuredText and Markdown — and can even turn Jupyter Notebooks into blog posts! It also supports image galleries, and is multilingual. Nikola is flexible, and page builds are extremely fast, courtesy of doit (which is rebuilding only what has been changed).

Find out more at the website: https://getnikola.com/


Install using pip install Nikola or download tarballs on GitHub and PyPI.

  • Add new Thai translation by Narumol Hankrotha and Jean Jordaan (v8 backport)
  • Hide “Incomplete language” message for overrides of complete languages
  • Restore ability to override messages partially

(Note: for a while, this post said v7.8.14 was released. We apologise for the confusion.)

Categories: FLOSS Project Planets

Import Python: #167: Detecting Resonant Frequency, Intro to Blockchain, EOL of Python 2.7 and more

Planet Python - Tue, 2018-03-20 13:35
Worthy Read
Running End-to-End Tests on Kubernetes
Integrating the build and deployment of up to 40 websites across the world is challenging. This blog talks about how one team solved real world CI/CD problems using Kubernetes and GoCD.
kubernetes, advert

Breaking a Wine Glass in Python By Detecting the Resonant Frequency
In today’s post, I walk through the journey of writing a Python program to break wine glasses on demand, by detecting their resonant frequency. Along the way we’ll 3D print a cone, learn about resonant frequencies, and see why I needed an amplifier and compression driver. So, let’s get started.
project, sound engineering

A Practical Introduction to Blockchain with Python
Blockchain is arguably one of the most significant and disruptive technologies that came into existence since the inception of the Internet. It's the core technology behind Bitcoin and other crypto-currencies that drew a lot of attention in the last few years. At its core, a blockchain is a distributed database that allows direct transactions between two parties without the need of a central authority. This simple yet powerful concept has great implications for various institutions such as banks, governments and marketplaces, just to name a few. Any business or organization that relies on a centralized database as a core competitive advantage can potentially be disrupted by blockchain technology.

Jan 1st 2020 is End of Life for Python 2.7
Curator's note - Lots of banks and financial companies are not going to upgrade and will be happy to pay vendors for security updates.

How to list the most common words from a text corpus using Scikit-Learn?
Frequently we want to know which words are the most common in a text corpus, since we are looking for some patterns.
machine learning, scikit

4 Ways to Improve Your DevOps Testing - Free eBook
Read the 4-part eBook to learn how to detect problems earlier in your DevOps testing processes by proactively responding to your monitoring software, integrating your security requirements in your initial development, replicating real-world conditions to find unexpected variables, and performing continuous testing to uncover points of failure.
advert, devops

How I implemented iPhone X’s FaceID using Deep Learning in Python
Reverse engineering iPhone X’s new unlocking mechanism.

Memory efficiency of parallel IO operations in Python
Python allows for several different approaches to parallel processing. The main issue with parallelism is knowing its limitations. We either want to parallelise IO operations or CPU-bound tasks like image processing. The first use case is something we focused on in the recent Python Weekend* and this article provides a summary of what we came up with.
parallel processing

Python 3.7’s new builtin breakpoint — a quick tour
Debugging in Python has always felt a bit “awkward” compared with other languages I’ve worked in. Introducing breakpoint().

Python Programming Exercises Book
It's free.

Markdown Descriptions on PyPI - Dustin Ingram
I’m really excited to say that as of today, PyPI supports rendering project descriptions from Markdown! This has been an oft-requested feature and after lots of work (including the creation of PEP 566) it is now possible, without translating Markdown to rST or any other hacks!

python-itertools
itertools.accumulate(iterable[, func])

Agile database integration tests with Python, SQLAlchemy and Factory Boy
So you are interested in testing, aren’t you? Not doing it yet? That’s the right time to start then! In this little example, I’m going to show a possible procedure to easily test your piece of code that interacts with a database.

Deploy TensorFlow models – Towards Data Science
Super fast and concise tutorial.

Stack Overflow Developer Survey 2018 - See how Python is doing
This year, over 100,000 developers told us how they learn, build their careers, which tools they’re using, and what they want in a job.

Senior Python Developer at causaLens - Remote (Europe)
We are looking for a motivated and high-achieving Senior Python Developer based anywhere in Europe to join the team working on an exciting new Big Data/Machine Learning platform. This is a full time placement with significant opportunities for growth and advancement as one of the first employees of the company.

Senior Python Developer (Crawling Engineer) - Remote Contractor at YMN LTD. (Turkey)
4+ years of software development experience. Scrapy experience is a big plus.

black - 958 Stars, 15 Forks: The uncompromising Python code formatter.
makesite - 216 Stars, 16 Forks: Simple, lightweight, and magic-free static site/blog generator for Python coders.
thug-memes - 115 Stars, 4 Forks: Command line Thug Meme generator written in Python.
requests-core - 79 Stars, 3 Forks: Experimental lower-level async HTTP client for Requests 3.0.
white - 78 Stars, 1 Fork: The Black code formatter, but brighter (PEP8–inspired).
socialsentiment - 40 Stars, 2 Forks: Sentiment Analysis application created with Python and Dash, hosted at socialsentiment.net.
rose - 12 Stars, 0 Forks: Analyse all kinds of data for a TV series.
onegram - 5 Stars, 0 Forks: A simplistic api-like instagram bot powered by requests.
convert-outlook-msg-file - 5 Stars, 0 Forks: Python library to convert Microsoft Outlook .msg files to .eml/MIME message files.
Siamese-LSTM - 4 Stars, 1 Fork: Siamese LSTM for evaluating semantic similarity between sentences of the Quora Question Pairs Dataset.
MusicTag - 3 Stars, 0 Forks: MusicTag allows you to download from YouTube all the music you want and automatically set the ID3 tags.
Categories: FLOSS Project Planets

Manifesto: Write once, chat anywhere: the easy way to create chatbots with Drupal

Planet Drupal - Tue, 2018-03-20 13:08
Historically, we have many ways of serving content digitally: through a website, through mobile apps, via social media, RSS feeds, RESTful APIs (allowing content to be consumed by other apps, websites etc), and email. Now we have a new player in the game: chatbots and personal assistants. Conversational interfaces promise a more natural way for. Continue reading...
Categories: FLOSS Project Planets

Dries Buytaert: Responsive accessible HTML tables

Planet Drupal - Tue, 2018-03-20 12:56

A very thorough explanation of how to build responsive accessible HTML tables. I'd love to compare this with Drupal's out-of-the-box approach to evaluate how we are doing.

Categories: FLOSS Project Planets