FLOSS Project Planets

Kushal Das: How to delete your Facebook account?

Planet Python - Wed, 2018-03-21 00:49

I had been planning to delete my Facebook account for some time, but never took the actual steps to do it. The recent news on how companies are using data from Facebook made me take that next step. And since Snowden has been talking about these issues for a long time (feel free to read a recent interview), I should have done this before. I was just lazy.

First download all the current information for archive

Login to Facebook and go to your settings page. There you can see a link saying Download a copy of your Facebook data. Click on that. It will ask for your password, and then take some time to generate an archive. You can download it after some time.

Let us ask Facebook to delete the account

Warning: Once you have deleted your account, you can not get your data back. So take the next steps only after thinking it through (personally, I can say it is a good first step to slowly gain back privacy).

Go to this link to see the following screen.

If you click on the blue Delete my account, it will open the next screen, where it will ask you to confirm your password, and also fill in the captcha text.

After this, you will see the final screen. It will take around 90 days to delete all of your information.

Remember to use long passphrases everywhere

Now, you have deleted your account. But remember that it is just one single step towards privacy. There are various other things you can do. I think the next step should be about all of your passwords. Read this blog post about how to generate long passphrases, and use those instead of short passwords. You should also use a proper password manager to save all of these passwords.


Categories: FLOSS Project Planets

Iustin Pop: Hakyll basics

Planet Debian - Tue, 2018-03-20 22:00

As part of my migration to Hakyll, I had to spend quite a bit of time understanding how it works before I became somewhat “at-home” with it. There are many posts that show “how to do x”, but not so many that explain its inner workings. Let me try to fix that: at its core, Hakyll is nothing else than a combination of make and m4 all in one. Simple, right? Let’s see :)

Note: in the following, basic proficiency with Haskell is assumed.

Monads and data types

Rules

The first area (the make equivalent), more precisely the Rules monad, concerns itself with the rules for mapping source files into output files, or creating output files from scratch.

Key to this mapping is the concept of an Identifier, which is a name in an abstract namespace. Most of the time—e.g. for all the examples in the upstream Hakyll tutorial—this identifier actually maps to a real source file, but this is not required; you can create an identifier from any string value.

The similarity, or relation, to file paths manifests in two ways:

  • the Identifier data type, although opaque, is internally implemented as a simple data type consisting of a file path and a “version”; the file path here points to the source file (if any), while the version is rather a variant of the item (not a numeric version!).
  • if the identifier has been included in a rule, it will have an output file (in the Compiler monad, via getRoute).

In effect, the Rules monad is all about taking source files (as identifiers) or creating them from scratch, and mapping them to output locations, while also declaring how to transform—or create—the contents of the source into the output (more on this later). Anyone can create an identifier value via fromFilePath, but “registering” them into the rules monad is done by one of the match or create calls.

Note: I’m probably misusing the term “registered” here. It’s not the specific value that is registered, but the identifier’s file path. Once this string value has been registered, one can use a different identifier value with a similar string (value) in various function calls.

Note: whether we use match or create doesn’t matter; only the actual values matter. So a match "foo.bar" is equivalent to create ["foo.bar"]; match here takes the list of identifiers from the file-system, but does not associate them with the files themselves—it’s just a way to get the list of strings.

The second argument to the match/create calls is another rules monad, in which we process the identifiers and tell Hakyll how to transform them.

This transformation has, as described, two aspects: how to map the file path to an output path, via the Routes data type, and how to compile the body, in the Compiler monad.

Name mapping

The name mapping starts with the route call, which lifts the routes into the rules monad.

The routing has the usual expected functionality:

  • idRoute :: Routes, which maps 1:1 the input file name to the output one.
  • setExtension :: String -> Routes, which changes the extension of the filename, or sets it (if there wasn’t any).
  • constRoute :: FilePath -> Routes, which is special in that it will result in the same output filename, which is obviously useful only for rules matching a single identifier.
  • and a few more options, like building the route based on the identifier (customRoute), building it based on metadata associated to the identifier (metadataRoute), composing routes, match-and-replace, etc.

All in all, routes offer all the needed functionality for mapping.

Note that how we declare the input identifier and how we compute the output route is irrelevant, what matters is the actual values. So for an identifier with name (file path) foo.bar, route idRoute is equivalent to constRoute "foo.bar".


Compilers

Slightly into more interesting territory here, as we’re moving beyond just file paths :) Lifting a compiler into the rules monad is done via the compile function:

compile :: (Binary a, Typeable a, Writable a) => Compiler (Item a) -> Rules ()

The Compiler monad result is an Item a, which is just an identifier with a body (of type a). The type variable a means we can return any Writable item. Many of the compiler functions work with/return String, but the flexibility to use other types is there.

The functionality in this module revolves around four topics:

The current identifier

First the very straightforward functions for the identifier itself:

  • getUnderlying :: Compiler Identifier, just returns the identifier
  • getUnderlyingExtension :: Compiler String, returns the extension

And for the body (data) of the identifier (mostly copied from the haddock of the module):

  • getResourceBody :: Compiler (Item String): returns the full contents of the matched source file as a string, but without the metadata preamble, if there is one.
  • getResourceString :: Compiler (Item String), returns the full contents of the matched source file as a string.
  • getResourceLBS :: Compiler (Item ByteString), equivalent to the above but as lazy bytestring.
  • getResourceFilePath :: Compiler FilePath, returns the file path of the resource we are compiling.

More or less, these return the data to enable doing arbitrary things to it, and are the cornerstone of a static site compiler. One could implement a simple “copy” compiler by doing just:

match "*.html" $ do
    -- route to the same path, per earlier explanation.
    route idRoute
    -- the compiler just returns the body of the source file.
    compile getResourceLBS

All the other functions in the module work on arbitrary identifiers.


I’m used to Yesod and its safe routes functionality. Hakyll has something slightly weaker, but with programmer discipline it can allow similar levels of “I know this will point to the right thing” (and maybe correct escaping as well). Enter the:

getRoute :: Identifier -> Compiler (Maybe FilePath)

function which I alluded to earlier, and which—either for the current identifier or another identifier—returns the destination file path, which is useful for composing links (as in HTML links) to it.

For example, instead of hard-coding the path to the archive page, as /archive.html, one can instead do the following:

let archiveId = "archive.html"

create [archiveId] $ do
    -- build here the archive page
    ...

-- later, in the index page
create ["index.html"] $ do
    compile $ do
        -- compute the actual url:
        archiveUrl <- maybe "" toUrl <$> getRoute archiveId
        -- then use it in the creation of the index.html page
        ...

The reuse of archiveId above ensures that if the actual path to the archive page changes (renames, site reorganisation, etc.), then all the links to it (assuming, again, discipline of not hard-coding them) are automatically pointing to the right place.

Working with other identifiers

Getting to the interesting aspect now. In the compiler monad, one can ask for any other identifier, whether it was already loaded/compiled or not—the monad takes care of tracking dependencies/compiling automatically/etc.

There are two main functions:

  • load :: (Binary a, Typeable a) => Identifier -> Compiler (Item a), which returns a single item, and
  • loadAll :: (Binary a, Typeable a) => Pattern -> Compiler [Item a], which returns a list of items, based on the same patterns used in the rules monad.

If the requested identifier/pattern does not match actual identifiers declared in the “parent” rules monad, then these calls will fail (as in monadic fail).

The use of other identifiers in a compiler step is what allows moving beyond “input file to output file”; aggregating a list of pages (e.g. blog posts) into a single archive page is the most obvious example.

But sometimes getting just the final result of the compilation step (of other identifiers) is not flexible enough—in case of HTML output, this includes the entire page, including the <html><head>…</head> part, not only the body we might be interested in. So, to ease any aggregation, one uses snapshots.


Snapshots

Snapshots allow, well, snapshotting the intermediate result under a specific name, to allow later retrieval:

  • saveSnapshot :: (Binary a, Typeable a) => Snapshot -> Item a -> Compiler (Item a), to save a snapshot
  • loadSnapshot :: (Binary a, Typeable a) => Identifier -> Snapshot -> Compiler (Item a), to load a snapshot, similar to load
  • loadAllSnapshots :: (Binary a, Typeable a) => Pattern -> Snapshot -> Compiler [Item a], similar to loadAll

One can save an arbitrary number of snapshots at various steps of the compilation, and then re-use them.

Note: load and loadAll are actually just the snapshot variant, with a hard-coded value for the snapshot. As I write this, the value is "_final", so probably it’s best not to use the underscore prefix for one’s own snapshots. A bit of a shame that this is not done better, type-wise.

What next?

We have rules to transform things, including smart name transforming, we have compiler functionality to transform the data. But everything mentioned until now is very generic, fundamental functionality, bare-bones to the bone (ha!).

With just this functionality, you have everything needed to build an actual site. But starting at this level would be too tedious even for hard-core fans of DIY, so Hakyll comes with some built-in extra functionality.

And that will be the next post in the series. This one is too long already :)

Categories: FLOSS Project Planets

Wingware News: Wing Python IDE 6.0.11: March 21, 2018

Planet Python - Tue, 2018-03-20 21:00
This release implements auto-save and restore for remote files, adds a Russian translation of the UI (thanks to Alexandr Dragukin), improves remote development error reporting and recovery after network breaks, correctly terminates SSH tunnels when switching projects or quitting, fixes severe network slowdown seen on High Sierra, auto-reactivates expired annual licenses without restarting Wing, and makes about 20 other improvements.
Categories: FLOSS Project Planets

Matthew Rocklin: Dask Release 0.17.2

Planet Python - Tue, 2018-03-20 20:00

This work is supported by Anaconda Inc. and the Data Driven Discovery Initiative from the Moore Foundation.

I’m pleased to announce the release of Dask version 0.17.2. This is a minor release with new features and stability improvements. This blogpost outlines notable changes since the 0.17.0 release on February 12th.

You can conda install Dask:

conda install dask

or pip install from PyPI:

pip install dask[complete] --upgrade

Full changelogs are available here:

Some notable changes follow:

Tornado 5.0

Tornado is a popular framework for concurrent network programming that Dask relies on heavily. Tornado recently released a major version update that included both some major features for Dask and a couple of bugs.

The new IOStream.read_into method allows Dask communications (or anyone using this API) to move large datasets more efficiently over the network with fewer copies. This enables Dask to take advantage of high performance networking available on modern super-computers. On the Cheyenne system, where we tested this, we were able to get the full 3GB/s bandwidth available through the Infiniband network with this change (when using a few worker processes).

Many thanks to Antoine Pitrou and Ben Darnell for their efforts on this.

At the same time there were some unforeseen issues in the update to Tornado 5.0. More pervasive use of bytearrays over bytes caused issues with compression libraries like Snappy and Python 2 that were not expecting these types. There is a brief window in distributed.__version__ == 1.21.3 that enables this functionality if Tornado 5.0 is present but will misbehave if Snappy is also present.

HTTP File System

Dask leverages a file-system-like protocol for access to remote data. This is what makes commands like the following work:

import dask.dataframe as dd

df = dd.read_parquet('s3://...')
df = dd.read_parquet('hdfs://...')
df = dd.read_parquet('gcs://...')

We have now added http and https file systems for reading data directly from web servers. These also support random access if the web server supports range queries.

df = dd.read_parquet('https://...')

As with S3, HDFS, GCS, … you can also use these tools outside of Dask development. Here we read the first twenty bytes of the Pandas license:

from dask.bytes.http import HTTPFileSystem

http = HTTPFileSystem()
with http.open('https://raw.githubusercontent.com/pandas-dev/pandas/master/LICENSE') as f:
    print(f.read(20))

b'BSD 3-Clause License'

Thanks to Martin Durant who did this work and manages Dask’s byte handling generally. See remote data documentation for more information.

Fixed a correctness bug in Dask dataframe’s shuffle

We identified and resolved a correctness bug in dask.dataframe’s shuffle that resulted in some rows being dropped during complex operations like joins and groupby-applies with many partitions.

See dask/dask #3201 for more information.

Cluster super-class and intelligent adaptive deployments

There are many Python subprojects that help you deploy Dask on different cluster resource managers like Yarn, SGE, Kubernetes, PBS, and more. These have all converged to have more-or-less the same API that we have now combined into a consistent interface that downstream projects can inherit from in distributed.deploy.Cluster.

Now that we have a consistent interface we have started to invest more in improving the interface and intelligence of these systems as a group. This includes both pleasant IPython widgets like the following:

as well as improved logic around adaptive deployments. Adaptive deployments allow clusters to scale themselves automatically based on current workload. If you have recently submitted a lot of work the scheduler will estimate its duration and ask for an appropriate number of workers to finish the computation quickly. When the computation has finished the scheduler will release the workers back to the system to free up resources.

The logic here has improved substantially including the following:

  • You can specify minimum and maximum limits on your adaptivity
  • The scheduler estimates computation duration and asks for workers appropriately
  • There is some additional delay in giving back workers to avoid hysteresis, or cases where we repeatedly ask for and return workers
Related projects

Some news from related projects:

  • The young daskernetes project was renamed to dask-kubernetes. This displaces a previous project (that had not been released) for launching Dask on Google Cloud Platform. That project has been renamed to dask-gke.
  • A new project, dask-jobqueue, was started to handle launching Dask clusters on traditional batch queuing systems like PBS, SLURM, SGE, TORQUE, etc. This project grew out of the Pangeo collaboration
  • A Dask Helm chart has been added to Helm’s stable channel

The following people contributed to the dask/dask repository since the 0.17.0 release on February 12th:

  • Anderson Banihirwe
  • Dan Collins
  • Dieter Weber
  • Gabriele Lanaro
  • John Kirkham
  • James Bourbeau
  • Julien Lhermitte
  • Matthew Rocklin
  • Martin Durant
  • Max Epstein
  • nkhadka
  • okkez
  • Pangeran Bottor
  • Rich Postelnik
  • Scott M. Edenbaum
  • Simon Perkins
  • Thrasibule
  • Tom Augspurger
  • Tor E Hagemann
  • Uwe L. Korn
  • Wes Roach

The following people contributed to the dask/distributed repository since the 1.21.0 release on February 12th:

  • Alexander Ford
  • Andy Jones
  • Antoine Pitrou
  • Brett Naul
  • Joe Hamman
  • John Kirkham
  • Loïc Estève
  • Matthew Rocklin
  • Matti Lyra
  • Sven Kreiss
  • Thrasibule
  • Tom Augspurger
Categories: FLOSS Project Planets

Justin Mason: Links for 2018-03-20

Planet Apache - Tue, 2018-03-20 19:58
  • SXSW 2018: A Look Back at the 1960s PLATO Computing System – IEEE Spectrum

    Author Brian Dear on how these terminals were designed for coursework, but students preferred to chat and play games […] “Out of the top 10 programs on PLATO running any day, most were games,” Dear says. “They used more CPU time than anything else.” In one popular game called Empire, players blast each other’s spaceships with phasers and torpedoes in order to take over planets. And PLATO had code review built into the OS: Another helpful feature that no longer exists was called Term Comment. It allowed users to leave feedback for developers and programmers at any place within a program where they spotted a typo or had trouble completing a task. To do this, the user would simply open a comment box and leave a note right there on the screen. Term Comment would append the comment to the user’s place in the program so that the recipient could easily navigate to it and clearly see the problem, instead of trying to recreate it from scratch on their own system. “That was immensely useful for developers,” Dear says. “If you were doing QA on software, you could quickly comment, and it would track exactly where the user left this comment. We never really got this on the Web, and it’s such a shame that we didn’t.”

    (tags: plato computing history chat empire gaming code-review coding brian-dear)

Categories: FLOSS Project Planets

Steinar H. Gunderson: Debian CEF packages

Planet Debian - Tue, 2018-03-20 19:38

I've created some Debian CEF packages—CEF isn't the easiest thing to package (and it takes an hour to build even on my 20-core server, since it needs to build basically all of Chromium), but it's fairly rewarding to see everything fall into place. It should benefit not only Nageru, but also OBS and potentially CasparCG if anyone wants to package that.

It's not in the NEW queue because it depends on a patch to chromium that I hope the Chromium maintainers are brave enough to include. :-)

Categories: FLOSS Project Planets

FSF Blogs: Friday Free Software Directory IRC meetup time: March 23rd starting at 12:00 p.m. EDT/16:00 UTC

GNU Planet! - Tue, 2018-03-20 18:15

Help improve the Free Software Directory by adding new entries and updating existing ones. Every Friday we meet on IRC in the #fsf channel on irc.freenode.org.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic category and descriptions, to providing detailed info about version control, IRC channels, documentation, and licensing info that has been carefully checked by FSF staff and trained volunteers.

When a user comes to the Directory, they know that everything in it is free software, has only free dependencies, and runs on a free OS. With over 16,000 entries, it is a massive repository of information about free software.

While the Directory has been and continues to be a great resource to the world for many years now, it has the potential to be a resource of even greater value. But it needs your help! And since it's a MediaWiki instance, it's easy for anyone to edit and contribute to the Directory.

This weekend, LibrePlanet 2018 converges on Cambridge, MA. One fantastic feature of LibrePlanet is the accessibility and potential for remote attendance because of the livestream of events. Livestreaming concurrent events is no easy feat when considering the moving pieces; all this momentum pushes development from ABYSS to HumpBack Anglerfish. This week, while we work on adding new programs, we can also look back at our favorite LibrePlanet speeches from previous years.

If you are eager to help, and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today! There are also weekly Directory Meeting pages that everyone is welcome to contribute to before, during, and after each meeting.

Categories: FLOSS Project Planets


Speed Matters: fastthreadpool

Planet Python - Tue, 2018-03-20 18:00

Existing implementations of thread pools have a relatively high overhead in certain situations, especially apply_async in multiprocessing.pool.ThreadPool and submit in concurrent.futures.ThreadPoolExecutor (see the benchmarks in the docs directory).

In case of ThreadPoolExecutor, don't use wait. It can be extremely slow! If you have only a small number of jobs and the jobs have a relatively long processing time, then these overheads don't count. But with a high number of jobs with short processing times, the overhead of the above implementations will noticeably slow down the processing speed. The fastthreadpool module solves this issue, because it has a very small overhead in all situations.

Although fastthreadpool is lightweight, it has some additional cool features, like methods for later scheduling, repeating events, and generator functions as worker callback functions.

In addition, to get the best performance, I've also written a fast and lightweight semaphore which is more than 20 times faster than the one which comes with the Python installation.

Some reasons why fastthreadpool is so fast:

  • Avoid locks as much as possible
  • Use deque instead of Queue
  • Do not create a class instance for every work item
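To make the deque idea concrete, here is a minimal, hypothetical sketch of a thread pool built on collections.deque plus a Condition. The class name TinyPool and its methods are illustrative inventions; this is not fastthreadpool's actual implementation, just a sketch of the job-queue-as-deque design:

```python
import threading
from collections import deque

class TinyPool:
    """A tiny deque-based thread pool (illustrative sketch only)."""

    def __init__(self, num_workers=4):
        self._jobs = deque()
        self._results = deque()   # deque appends are atomic under the GIL
        self._cond = threading.Condition()
        self._shutdown = False
        self._threads = [threading.Thread(target=self._worker, daemon=True)
                         for _ in range(num_workers)]
        for t in self._threads:
            t.start()

    def _worker(self):
        while True:
            with self._cond:
                while not self._jobs and not self._shutdown:
                    self._cond.wait()
                if not self._jobs:   # shutdown requested and queue drained
                    return
                fn, args = self._jobs.popleft()
            self._results.append(fn(*args))   # run the job outside the lock

    def submit(self, fn, *args):
        with self._cond:
            self._jobs.append((fn, args))
            self._cond.notify()

    def shutdown(self):
        """Signal the workers, wait for them, and return all results."""
        with self._cond:
            self._shutdown = True
            self._cond.notify_all()
        for t in self._threads:
            t.join()
        return list(self._results)

pool = TinyPool(num_workers=2)
for i in range(10):
    pool.submit(lambda x: x * x, i)
print(sorted(pool.shutdown()))   # → [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Note how the deque is only guarded while popping a job; the job itself runs outside the lock, and results are collected with plain (GIL-atomic) deque appends instead of a Queue.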

The first test in benchmarks.py has a minimum worker callback function which just returns the given parameter. The main thread calculates the sum of the returned values. This is the most extreme case where the overhead of the thread pool implementation counts as much as possible. This is not the typical use case but it shows the overhead of the different thread pool implementations very well.

The results show that map in ThreadPool performs well, but map in fastthreadpool is a bit more efficient.

But apply_async has a very bad performance, even worse than submit in ThreadPoolExecutor. submit in fastthreadpool performs very well.

Thread Pool          Function     Time
single threaded      for loop     0.378
fastthreadpool       map          0.166
ThreadPool           map_async    0.280
ThreadPoolExecutor   map          53.072
fastthreadpool       submit       2.679
ThreadPool           apply_async  76.350
ThreadPoolExecutor   submit       59.161

The last example shows a more typical case, where the worker threads serialize and compress data.

Thread Pool          Function     Time
single threaded      for loop     0.628
fastthreadpool       map          0.598
ThreadPool           map_async    0.609
ThreadPoolExecutor   map          1.192
fastthreadpool       submit       0.659
ThreadPool           apply_async  1.317
ThreadPoolExecutor   submit       1.169

As you can see, when submitting single jobs to the pool, the worker threads are still about 2 times faster with fastthreadpool than with the other two thread pool implementations.

Again, this example clearly shows that if speed matters, you should avoid concurrent.futures. Although it has a nice interface, it is really slow.

For examples how to use fastthreadpool please have a look at the examples directory.

Check out the fastthreadpool module on github, licensed under the MIT license.

Categories: FLOSS Project Planets

Reuven Lerner: Four ways to assign variables in Python

Planet Python - Tue, 2018-03-20 16:02

Within minutes of starting to learn Python, everyone learns how to define a variable. You can say:

x = 100

and voilà! You have created a variable “x”, and assigned the integer value 100 to it. It couldn’t be simpler than that.

But guess what?  This isn’t the only way to define variables, let alone assign to them, in Python. And by understanding the other ways that we can define variables, and the nuances of the “simple” variable assignment that I described above, you can get a better appreciation for Python’s consistency, along with the idea that “everything is an object.”

Method 1: Plain ol’ assignment

The first way to define a variable is with the assignment operator, =.  Once again, I can say

x = 100

and I have assigned to x. If x didn’t exist before, then it does now. If it did exist before, then x now points to a new and different object.

What if the new, different object is also of a new and different type? We don’t really care; the nature of a dynamic language is such that any variable can point to an object of any type. When we say “type(x)” in our code, we’re not really asking what type of object the variable “x” can hold; rather, we’re asking what type of object “x” is pointing to right now.

One of the major misconceptions that I come across in my Python courses and newsletter is just how Python allocates and stores data.  Many of my students come from a C or C++ background, and are thus used to the “box model” of variable assignment: When we declare the variable “x”, we have to declare it with a type.  The language then allocates enough memory to store that type, and gives that memory storage an alias, “x”.  When I say “x=100”, the language puts the value 100 inside of the box named “x”.  If the box isn’t big enough, bad news!

In Python, though, we don’t use the box model of variable assignment. Rather, we have the dictionary model of variable assignment: When we assign “x=100”, we’re creating (or updating) a key-value pair in a dictionary. The key’s name is “x”, and the value is 100, or anything else at all. Just as we don’t care what type of data is stored in a dictionary’s values, we also don’t care what type of value is stored inside of a Python variable’s value.

(I know, it’s a bit mind-bending to say that Python variables are stored in a dictionary, when dictionaries are themselves Python values, and are stored in Python variables. It’s turtles all the way down, as they say.)

Don’t believe me?  Just run the “globals” function in Python. You’ll get a dictionary back, in which the keys are all of the global variables you’ve defined, and the values are all of the values of those variables.  For example:

>>> x = 100
>>> y = [10, 20, 30]
>>> globals()
{'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <class '_frozen_importlib.BuiltinImporter'>, '__spec__': None, '__annotations__': {}, '__builtins__': <module 'builtins' (built-in)>, 'x': 100, 'y': [10, 20, 30]}
>>> x = 200
>>> y = {100, 200, 300}
>>> globals()
{'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <class '_frozen_importlib.BuiltinImporter'>, '__spec__': None, '__annotations__': {}, '__builtins__': <module 'builtins' (built-in)>, 'x': 200, 'y': {200, 100, 300}}

But wait, we can do even better:

>>> globals()['x'] = 300
>>> x
300
>>> globals()['y'] = {'a':1, 'b':2}
>>> y
{'a': 1, 'b': 2}

That’s right; the result of invoking “globals” is not only a dictionary showing us all of the global variables, but is something we can modify — and whose modifications not only reflect, but also affect, our global variables.

Now, you might be thinking, “But not all Python variables are globals.” That’s true; Python’s scoping rules describe four levels: Local, Enclosing, Global, and Builtins.  Normally, assignment creates/updates either a global variable or a local one.  How does Python know which one to use?

The answer is pretty simple, actually: When you assign inside of a function, you’re working with a local variable. When you assign outside of a function, you’re working with a global variable.  It’s pretty much as simple as that.  Sure, there are some exceptions, such as when you use the “global” or “nonlocal” keyword to force Python’s hand, and create/update a variable in a non-local scope when you’re within a function.   But the overwhelming majority of the time, you can assume that assignment within a function creates or updates a local variable, and outside of a function creates or updates a global variable.
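For instance, the “global” keyword forces an assignment inside a function to target the global name (the names counter and bump here are just illustrative):

```python
counter = 0

def bump():
    global counter      # without this line, "counter = ..." would create a local
    counter = counter + 1

bump()
bump()
print(counter)   # → 2
```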

Note that this means assignment inside of a “for” loop or “if” statement doesn’t create a local variable; it creates a global one.  It’s common for my students to believe that because they are inside of an indented block, the variable they are creating is not global.  Not true!
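A quick demonstration of that point, with illustrative names n and total: assignment at module level inside a “for” loop still creates globals.

```python
# At module level, names assigned inside an indented block are still globals.
for n in [10, 20, 30]:
    total = n

# Both the loop variable and the body's assignment landed in globals():
print('n' in globals(), 'total' in globals())   # → True True
print(n, total)                                 # → 30 30
```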

If you’re in a function and want to see the variables that have been created, you can use the “locals” function, which operates just like the “globals” function we looked at earlier.  Even better, you can use the “vars” function, which invokes “locals” inside of a function and “globals” outside of a function.
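For example, a small hypothetical helper shows “vars” returning the local namespace when called inside a function:

```python
def show_scope(a):
    b = a * 2
    # inside a function, vars() (like locals()) returns the local namespace
    return vars()

print(show_scope(10))   # → {'a': 10, 'b': 20}
```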

So the first type of variable assignment in Python is also the most common, and can be subdivided into two basic categories, local and global.

I should add that I’m only talking here about variable assignment, not attribute assignment. Attributes (i.e., anything that comes after a “.” character, as “b” in the expression “a.b”) are a totally different kettle of fish, and have their own rules and quirks.

Method 2: def

If you want to create a function in Python, you use the “def” keyword, as in:

>>> def hello(name):
...     return f"Hello, {name}"    # I love f-strings!

While we often think that “def” simply defines a function, I think it’s easier to think of “def” as actually doing two things:

  • Creating a new function object
  • Assigning that function object to the name immediately following the “def”

Thinking about “def” this way, as object creation + assignment, makes it easier to understand a variety of different situations that Python developers encounter.
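One way to see this creation-plus-assignment split is that the resulting function object can be assigned to additional names, and remembers the name it was given by "def" (the name "greet" here is just for illustration):

```python
def hello(name):
    return f"Hello, {name}"

greet = hello             # a second name for the same function object
print(greet("world"))     # Hello, world
print(hello.__name__)     # hello -- the object remembers its "def" name
print(greet is hello)     # True: one object, two names
```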

First of all, “def” defines variables in the same scope as a simple variable assignment would. So if you’re in the global scope, you’re creating a global variable. Remember that Python doesn’t have separate namespaces for data and functions. This means that if you’re not careful, you can accidentally destroy your function or variable definition:

>>> x = 100
>>> def x():
...     return "Hello!"
>>> print(x*2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for *: 'function' and 'int'


>>> def x():
...     return "Hello!"
>>> x = 100
>>> print(x())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable

In both of these cases, the programmer assumed that setting a variable and defining a function wouldn’t interfere with one another, with clearly disastrous results.

The fact that “def” assigns to a variable also explains why you cannot define a function multiple times in Python, each with its own function signature (i.e., argument count and types). Some people, for example, expect to be able to do the following:

>>> def hello(name):
...     return f"Hello, {name}"
>>> def hello(first, last):
...     return f"Hello, {first} {last}"

They then assume that we can invoke “hello” with either one argument (and invoke the first version) or two arguments (and invoke the second).  However, this isn’t true; the fact that “def” defines a variable means that the second “def” obliterates whatever you did with the first.  Consider this code, in which I define “x” twice in a row with two different values:

>>> x = 100
>>> x = 200

You cannot realistically think that Python will know which “x” to choose according to context, right?  In the same way, Python cannot choose a function definition for you.  It looks up the function’s name in “globals()”, gets the function object from that dictionary, and then invokes it (thanks to the parentheses).
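You can even watch that lookup happen by hand; this sketch spells out the two steps that a normal function call performs for you:

```python
def hello(name):
    return f"Hello, {name}"

# invoking hello("world") is essentially this two-step dance:
func = globals()['hello']   # look up the name, get the function object
print(func('world'))        # Hello, world
```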

A final aspect of seeing “def” as variable assignment has to do with inner functions — that is, functions defined within other functions.  For example:

>>> def foo():
...     def bar():
...         print("I'm in bar!")
...     return bar

What’s happening here?  Well, let’s assume that we execute function “foo”.    Now we’re inside of a function, which means that every time we define a variable, it’ll be a local variable, rather than a global one. The next thing we hit is “def”, which defines a variable.  Well, that means “bar” is a local variable inside of the “foo” function.

Then, in the next line of “foo”, after defining “bar”, we return a local variable to whoever called “foo”.  We can return any kind of local variable we like from a function; in this particular case, though, we’re returning a function object.

This just scratches the surface of inner functions, but the logic — we’re defining and then returning a local variable in “foo” — is consistent, and should make more sense if you think of “def” as just assigning variables.
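To close the loop on that example, here's a slight variation in which "bar" returns a string rather than printing it, so we can see that the caller of "foo" really does get a usable function object back:

```python
def foo():
    def bar():
        return "I'm in bar!"
    return bar

b = foo()           # b is now the inner function object
print(b())          # I'm in bar!
print(b.__name__)   # bar -- the name it got from its own "def"
```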

Method 3: import

One of the most powerful aspects of Python is its very mature standard library. When you download and install Python, you have hundreds (thousands?) of modules available to you and your programs, just by using the “import” statement.

Well, guess what? “import” is defining a variable, too. Indeed, it often helps to think about “import” as defining a single variable, namely the one whose name you give when you invoke it.  For example:

>>> import os

When I invoke the above, I am doing three things:

  • Python finds the module file os.py (or some variation thereof) and executes it
  • Python defines the variable “os” in the current scope
  • All of the global variables defined in the module are turned into attributes on the “os” variable.

It’s pretty easy to see that “import” defines a variable:

>>> type(os)
<class 'module'>

What’s a bit harder to understand is the idea that the global variables defined inside of our module all become attributes on the module object. For example, in “os.py”, there is a top-level definition of the “walk” function. Inside of the module, it’s a global variable (function).  But to whoever imports “os”, “walk” isn’t a variable.  Rather, it’s an attribute, “os.walk”.  That’s why if we want to invoke it, we need to use the complete name, “os.walk”.

That’s fine, and works pretty well overall. But if you’re going to be using “os.walk” a lot, then you might not want the overhead of saying “os.walk” each and every time. Instead, you might want to create a global variable whose value is the same as “os.walk”.  You might even say something like this:

walk = os.walk

Since we’re assigning a variable, and since we’re not inside of a function, we’re creating a global variable here.  And since “os.walk” is already defined, we’re simply adding a new reference (name) to “os.walk”.
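A short sketch makes the point: after the assignment, the two names refer to one and the same function object.

```python
import os

walk = os.walk            # a new global name for the existing function object

print(walk is os.walk)    # True: one object, two names
print(type(walk))         # <class 'function'>
```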

A faster, easier, and more Pythonic way to do this is with:

from os import walk

Although to be honest, this isn’t quite the same as what I did before. That’s because “from os import walk” does find the “os.py” module file, and does execute its contents — but it doesn’t define “os” as a variable in our global namespace.  Rather, it only creates “walk” as a variable, pointing to the value of what the module knows as “os.walk”.

Does that mean that “from … import …” doesn’t actually load the module? That would be pretty silly, in that future imports of “os” would then be less effective and efficient.

Python is actually pretty clever in this case: When it executes the module file, it creates the module object in sys.modules — a dictionary whose keys are the names of the modules we have loaded.  That’s how “import” knows to import a file only once; it can run “in” on sys.modules, check to see if a module has already been loaded, and only actually import it if the module is missing from that dict.

What if the module has been loaded?  Then “import” still defines the variable in the current namespace.  Thus, if your project has “import os” in 10 different files, then only the first invocation of “import os” actually loads “os.py”.  The nine following times, you just get a variable definition that points to the module object.
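You can check this caching behavior yourself; a repeated import simply binds another name to the module object already sitting in sys.modules:

```python
import sys
import os

print('os' in sys.modules)         # True: the module object is cached

import os as os2                   # a second import: no re-execution,
print(os2 is sys.modules['os'])    # just another name for the cached object
print(os2 is os)                   # True
```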

In the case of “from-import”, the same thing happens, except that instead of assigning the module’s name as a variable in the current namespace, you get the name(s) you explicitly asked for.

In theory, you can use “import” and “from-import” inside of a function, in which case you’ll define a local variable.  I’m sure that there is some use for doing so, but I can’t think of what it would be.  However, here Python is also remarkably consistent, allowing you to do things that wouldn’t necessarily be useful.
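For what it's worth, one pattern sometimes seen in the wild (the function name here is hypothetical) is deferring an import until a function actually runs, which makes the module name local to that function:

```python
def load_config(text):
    import json               # "json" becomes a local variable here
    return json.loads(text)

print(load_config('{"debug": true}'))   # {'debug': True}
print('json' in globals())              # False: the name never escaped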

In Python 2.7, there were two different implementations of the “pickle” module, one written in Python (“pickle”) and one written in C (“cPickle”).  The latter executed much faster, but wasn’t available on all platforms.  It was often recommended that programmers do this, to get the best possible version:

>>> try:
...     import cPickle as pickle
... except ImportError:
...     import pickle
...
>>> pickle
<module 'cPickle' from '/usr/local/Cellar/python@2/2.7.14_3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/cPickle.so'>

This technique (which isn’t necessary with Python 3) only worked because “import” is executed at runtime, along with other Python code, and because “import” defines a variable.  What the above code basically says is, “Try to load cPickle; if you succeed, use the name pickle.  If you can’t do that, then just load the regular ol’ pickle library.”

Method 4: class

The fourth and final way to define a variable in Python is with the “class” statement, which we use to create (big surprise) classes.  We can create a class as follows:

class Foo(object):
    def __init__(self, x):
        self.x = x

Newcomers to Python often think of classes as blueprints, or plans, or descriptions, of the objects that will be created. But they aren’t that at all — classes in Python are objects, just like anything else.  And since they’re objects, they have both a type (which we can see with the “type” function) and attributes (which we can see with the “dir” function):

>>> type(Foo)
<class 'type'>
>>> dir(Foo)
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__']

The “class” keyword defines a new variable — Foo, in this case — which is assigned to a class object. A class object is just like everything else, except that it is callable, meaning that we can execute it (with parentheses) and get a new object back:

>>> Foo(10)
<__main__.Foo object at 0x10da5fc18>

Defining a class means that we’re defining a variable — which means that if we have a previous class of the same name, we’re going to overwrite it, just as was the case with functions.
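A minimal sketch of that overwriting behavior, using a hypothetical Greeter class:

```python
class Greeter:
    def greet(self):
        return "hello"

class Greeter:                # a second "class" statement with the same name,
    def greet(self):          # so the first class object is simply replaced
        return "bonjour"

print(Greeter().greet())      # bonjour
```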

Class objects, like nearly all objects in Python, can have attributes added to them at runtime.  We can thus say:

>>> Foo.abc = 100

And now I’ve added the “abc” attribute to the “Foo” object, which is a class.  I can also do that inside of a class definition:

>>> class Foo(object):
...     abc = 100
...     def __init__(self, x):
...         self.x = x
...
>>> Foo.abc
100

How is this possible?  Haven’t we, in our class definition, created a variable “abc”?  Nope — it looks like a variable, but it’s actually an attribute on the “Foo” class.  And it must be an attribute, not only because we can retrieve it with “Foo.abc” later, but because all assignments inside of a class definition aren’t creating variables, but rather attributes.  Inside of “class Foo”, we’re thus creating two attributes on the class — not only “abc”, but also “__init__”, the method.

Does this seem familiar, the idea that global variables we define in one context are seen as attributes in another context?  Yes, that’s right: we saw the same thing with modules.  Global variables defined in a module are seen, once the module is imported, as attributes on that module.

You can thus think of classes, in some ways, as modules that don’t require a separate file. There are other issues as well, such as the difference between methods and functions, but in general, understanding that whatever you do inside of a class definition is treated much as it would be at the top level of a module can help to improve your understanding.

I don’t do this very often, but what does it mean, then, if I define a class within another class?  In some languages, such “inner classes” are private, and only available for use within the outer class. Not so in Python; since “class” defines a variable and any variable assignments within a class actually create attributes, an inner class is available to us via the outer class’s attribute:

>>> class Foo(object):
...     def __init__(self, x):
...         self.x = x
...     class Bar(object):
...         def __init__(self, y):
...             self.y = y
...
>>> b = Foo.Bar(20)
>>> b.y
20

Variables in Python aren’t just for storing data; they store our functions, modules, and classes, as well.  Understanding how various keywords define and update these variables can really help to understand what Python code is doing — and also provides for some cool tricks you can use in your own code.

The post Four ways to assign variables in Python appeared first on Lerner Consulting Blog.

Categories: FLOSS Project Planets

Reproducible builds folks: Reproducible Builds: Weekly report #151

Planet Debian - Tue, 2018-03-20 15:59

Here's what happened in the Reproducible Builds effort between Sunday March 11 and Saturday March 17 2018:

Upcoming events

Patches submitted

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (168)
  • Emmanuel Bourg (2)
  • Pirate Praveen (1)
  • Tiago Stürmer Daitx (1)

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Categories: FLOSS Project Planets

Cutelyst 2 released with HTTP/2 support

Planet KDE - Tue, 2018-03-20 15:51

Cutelyst, the Qt/C++ web framework, just got a major release update. Around one and a half years ago Cutelyst v1 got its first release with a stable API/ABI; many improvements were made during this period, but now it was time to clean up the mistakes and make room for new features.

Porting applications to v2 is mostly a breeze: since most API changes were done on the Engine class, replacing 1 with 2 and recompiling should be enough in most cases. At least this was the case for CMlyst, Stickyst and my personal applications.

Due to the cleanup, the Cutelyst Core module got a size reduction, while the WSGI module grew a bit due to the new HTTP/2 parser. Windows MSVC was finally able to build and test all modules.

The WSGI module now defaults to using our custom EPoll event loop (it can be switched back to Qt’s default one with an environment variable), which allows for steady performance without degradation as the number of simultaneous connections increases.

The validator plugins by Matthias got their share of improvements, a new password-quality validator was added, plus manual pages for the tools.

The HTTP/2 parser adds more value to our framework; its binary nature makes it very easy to implement, and in two days most of it was already working. But HTTP/2 comes with a dependency called HPACK, which has its own RFC. HPACK is the header compression mechanism created for HTTP/2, because the gzip compression used in SPDY had a security issue over HTTPS known as CRIME.

The problem is that HPACK is not very trivial to implement, and it took many hours and made KCalc my best friend when converting hex to binary to decimal and what not…
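To give a flavor of the bit-fiddling involved, here is a sketch (in Python, for readability, not the C++ used by Cutelyst) of HPACK’s integer encoding from RFC 7541 §5.1: a value is packed into an N-bit prefix, and anything that overflows spills into continuation bytes carrying 7 bits each.

```python
def hpack_encode_int(value, prefix_bits):
    """Encode an integer with an N-bit prefix, per RFC 7541 section 5.1."""
    limit = (1 << prefix_bits) - 1
    if value < limit:
        return bytes([value])            # fits entirely in the prefix
    out = [limit]                        # prefix saturated: all ones
    value -= limit
    while value >= 128:
        out.append((value % 128) | 128)  # 7 payload bits + continuation bit
        value //= 128
    out.append(value)
    return bytes(out)

# The worked example from the RFC: 1337 with a 5-bit prefix
print(list(hpack_encode_int(1337, 5)))   # [31, 154, 10]
print(list(hpack_encode_int(10, 5)))     # [10]
```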

Cutelyst’s HTTP/2 parser passes all tests of a tool named h2spec, and using h2load it even showed more requests per second than HTTP/1, but it’s complicated to benchmark these two different protocols, especially with different load tools.

Upgrading from HTTP/1.1 is supported with a switch, as is enabling H2 on HTTPS using ALPN negotiation (which is the only option browsers support). H2C, or HTTP/2 in clear text, is also supported, but it’s only useful if the client can connect with prior knowledge.

If you know HTTP/2, your question is: “Does it support server push?” No, it doesn’t at the moment. SERVER_PUSH is a feature that allows the server to send CSS and JavaScript without the browser asking for it, so it can avoid the request the browser would otherwise make. However, this feature isn’t magical (it won’t make slow websites super fast), it’s also hard to do right, and each browser has its own complicated issues with it.

I strongly recommend reading this https://jakearchibald.com/2017/h2-push-tougher-than-i-thought/ .

This does not mean SERVER_PUSH won’t be implemented; quite the opposite. Due to the need to implement it properly, I want more time to study the RFC and browser behavior so that I can provide a good API.

I have also done some last-minute performance improvements with the help of KDAB’s Hotspot/perf, and I must say that the days of profiling with weird, huge perf command-line options are gone. Awesome tool!

Get it! https://github.com/cutelyst/cutelyst/archive/v2.0.0.tar.gz

If you like it please give us a star on GitHub!

Have fun!

Categories: FLOSS Project Planets

Nikola: Nikola v7.8.13 is out! (maintenance release)

Planet Python - Tue, 2018-03-20 13:50

On behalf of the Nikola team, I am pleased to announce the immediate availability of Nikola v7.8.13. This is a maintenance release for the v7 series.

Future releases in the v7 series are going to be small maintenance releases that include bugfixes only, as work on v8.0.0 is underway.

What is Nikola?

Nikola is a static site and blog generator, written in Python. It can use Mako and Jinja2 templates, and input in many popular markup formats, such as reStructuredText and Markdown — and can even turn Jupyter Notebooks into blog posts! It also supports image galleries, and is multilingual. Nikola is flexible, and page builds are extremely fast, courtesy of doit (which is rebuilding only what has been changed).

Find out more at the website: https://getnikola.com/


Install using pip install Nikola or download tarballs on GitHub and PyPI.

  • Add new Thai translation by Narumol Hankrotha and Jean Jordaan (v8 backport)
  • Hide “Incomplete language” message for overrides of complete languages
  • Restore ability to override messages partially

(Note: for a while, this post said v7.8.14 was released. We apologise for the confusion.)

Categories: FLOSS Project Planets

Import Python: #167: Detecting Resonant Frequency, Intro to Blockchain, EOL of Python 2.7 and more

Planet Python - Tue, 2018-03-20 13:35
Worthy Read
Running End-to-End Tests on Kubernetes Integrating the build and deployment of up to 40 websites across the world is challenging. This blog talks about how one team solved real world CI/CD problems using Kubernetes and GoCD.
kubernetes, advert
Breaking a Wine Glass in Python By Detecting the Resonant Frequency In today’s post, I walk through the journey of writing a Python program to break wine glasses on demand, by detecting their resonant frequency. Along the way we’ll 3D print a cone, learn about resonant frequencies, and see why I needed an amplifier and compression driver. So, let’s get started.
project, sound engineering
A Practical Introduction to Blockchain with Python. Blockchain is arguably one of the most significant and disruptive technologies that came into existence since the inception of the Internet. It's the core technology behind Bitcoin and other crypto-currencies that drew a lot of attention in the last few years. As its core, a blockchain is a distributed database that allows direct transactions between two parties without the need of a central authority. This simple yet powerful concept has great implications for various institutions such as banks, governments and marketplaces, just to name a few. Any business or organization that relies on a centralized database as a core competitive advantage can potentially be disrupted by blockchain technology.
Jan 1st 2020 is End of Life for Python 2.7 Curator's note - Lot of banks and financial companies are not going to upgrade and be happy to pay vendors for security updates.
How to list the most common words from text corpus using Scikit-Learn? Frequently we want to know which words are the most common in a text corpus, since we are looking for patterns.
machine learning, scikit
4 Ways to Improve Your DevOps Testing - Free eBook Read the 4-part eBook to learn how to detect problems earlier in your DevOps testing processes by Proactively responding to your monitoring software, Integrating your security reqs in your initial development, Replicating real-world conditions to find unexpected variables, Performing continuous testing to uncover points of failure.
advert, devops
How I implemented iPhone X’s FaceID using Deep Learning in Python. Reverse engineering iPhone X’s new unlocking mechanism.
Memory efficiency of parallel IO operations in Python Python allows for several different approaches to parallel processing. The main issue with parallelism is knowing its limitations. We either want to parallelise IO operations or CPU-bound tasks like image processing. The first use case is something we focused on in the recent Python Weekend* and this article provides a summary of what we came up with.
parallel processing
Python 3.7’s new builtin breakpoint — a quick tour Debugging in Python has always felt a bit “awkward” compared with other languages I’ve worked in. Introducing breakpoint()
Python Programming Exercises Book It's free.
Markdown Descriptions on PyPI - Dustin Ingram I’m really excited to say that as of today, PyPI supports rendering project descriptions from Markdown! This has been a oft-requested feature and after lots of work (including the creation of PEP 566) it is now possible, without translating Markdown to rST or any other hacks!
python-itertools itertools.accumulate(iterable[, func])
Agile database integration tests with Python, SQLAlchemy and Factory Boy So you are interested in testing, aren’t you? Not doing it yet? That’s the right time to start then! In this little example, I’m going to show a possible procedure to easily test your piece of code that interacts with a database.
Deploy TensorFlow models – Towards Data Science Super fast and concise tutorial
Stack Overflow Developer Survey 2018 - See how Python is doing. This year, over 100,000 developers told us how they learn, build their careers, which tools they’re using, and what they want in a job.

Senior Python Developer at causaLens Remote (Europe) We are looking for a motivated and high-achieving Senior Python Developer based anywhere in Europe to join the team working on an exciting new Big Data/Machine Learning platform. This is a full time placement with significant opportunities for growth and advancement as one of the first employees of the company.

Senior Python Developer (Crawling Engineer) - Remote Contractor at YMN LTD. Turkey 4+ years of software development experience. -Scrapy experience is a big plus.

black - 958 Stars, 15 Fork The uncompromising Python code formatter.
makesite - 216 Stars, 16 Fork Simple, lightweight, and magic-free static site/blog generator for Python coders
thug-memes - 115 Stars, 4 Fork Command line Thug Meme generator written in Python.
requests-core - 79 Stars, 3 Fork Experimental lower-level async HTTP client for Requests 3.0
white - 78 Stars, 1 Fork The Black code formatter, but brighter (PEP8–inspired).
socialsentiment - 40 Stars, 2 Fork Sentiment Analysis application created with Python and Dash, hosted at socialsentiment.net.
rose - 12 Stars, 0 Fork Analyse all kinds of data for a TV series.
onegram - 5 Stars, 0 Fork A simplistic api-like instagram bot powered by requests.
convert-outlook-msg-file - 5 Stars, 0 Fork Python library to convert Microsoft Outlook .msg files to .eml/MIME message files.
Siamese-LSTM - 4 Stars, 1 Fork Siamese LSTM for evaluating semantic similarity between sentences of the Quora Question Pairs Dataset.
MusicTag - 3 Stars, 0 Fork MusicTag allows you to download from YouTube all the music you want and automatically set the ID3 tags.
Categories: FLOSS Project Planets

Manifesto: Write once, chat anywhere: the easy way to create chatbots with Drupal

Planet Drupal - Tue, 2018-03-20 13:08
Historically, we have many ways of serving content digitally: through a website, through mobile apps, via social media, RSS feeds, RESTful APIs (allowing content to be consumed by other apps, websites etc), and email. Now we have a new player in the game: chatbots and personal assistants. Conversational interfaces promise a more natural way for. Continue reading...
Categories: FLOSS Project Planets

Dries Buytaert: Responsive accessible HTML tables

Planet Drupal - Tue, 2018-03-20 12:56

A very thorough explanation of how to build responsive accessible HTML tables. I'd love to compare this with Drupal's out-of-the-box approach to evaluate how we are doing.

Categories: FLOSS Project Planets

Drop Guard: How applying a critical Drupal update cost a Drupal agency 1,750.00€ minimum

Planet Drupal - Tue, 2018-03-20 12:30
How applying a critical Drupal update cost a Drupal agency 1,750.00€ minimum. This article is meant as a further step in raising agencies’ and customers’ awareness of the huge expenses involved in update management in Drupal.

It’s not about promoting a single solution or product. It’s about becoming more sensitive to processes that could, or should, be far smarter and more efficient than they are in most companies right now. It’s about creating processes that are resource-friendly, customer-focused and that support automation.


Drupal Business Automation Drupal Planet
Categories: FLOSS Project Planets

Aten Design Group: A Tool for Estimating College Tuition

Planet Drupal - Tue, 2018-03-20 11:50

Every generation is a little different. They bring different beliefs and perspectives tied to their upbringing. Generation Z, those born in 1995 or later, have been brought up in a world where information is at their fingertips. They are digitally connected through laptops, tablets and smartphones. If they have a question, all they have to do is pull out a device and ask Siri, Google or Alexa. They have a drive for getting information, learning new things and making an impact.

Gen Z students interested in Stanford are no different. We've often heard Stanford describe their students as adventurous, highly motivated, and passionate in their desire to come together and deepen their learning. During their time at Stanford, they will discover what motivates them and how they can impact the world after graduation. This is true of full-time Stanford students, as well as visiting students participating in the Summer Session. Stanford’s Summer Session provides current Stanford students, high schoolers and students at other universities with the unique opportunity to complete a summer at Stanford – getting a full experience of the coursework and college life Stanford offers.

The program isn’t cheap, and not every student can afford it.

We recently worked with the Stanford Summer Session communications team to combine the high school and college level websites into one site with a fresh design and structure. We kicked off the project with a discovery phase that included user surveys and stakeholder interviews.

The Problem Image 1 The old tuition tables from Stanford Summer Session. Students had to navigate the fees and add all the relevant fees up on their own.

As we learned from the user surveys, students are cost-aware individuals, just like Gen Zers all over the nation. Corey Seemiller and Meghan Grace in Generation Z Goes to College couldn’t have put it any better: “Anxiety over being able to afford a college education is forefront on the minds of these students.” Prospective Summer Session students were specifically looking for ways to make the program fit within their budget. This message was amplified by the interviews we conducted with the Summer Session staff, who noted that tuition information was buried, and in some cases, scattered throughout the site.

Through more discussion with stakeholders and students, we learned that students struggled with deciphering the available tuition and fee tables, inhibiting them from learning how much the program would cost (see Image 1). Additionally, as fees and tuition changed every year, staff found it painful and time-consuming to update information in the old system.

The Solution

So what did we do to help these students out? We created a one-stop-shop to get an estimate of the cost – welcome the tuition and fees calculator. By answering a few simple questions, students can get a true estimate of the total cost for their summer at Stanford. If they’re not happy with the results, they can tweak their answers to help make the program fit their budget.

As an added bonus, the system makes it really easy for staff to update the costs each year! On the admin side, each question and list of answers are fully editable. The dollar amounts can be changed for each student type to ensure the estimate will stay accurate for each coming school year. On the front end, the questions are organized, options are shown or hidden depending on previous answers, and the total amount is tallied on the final screen. Vue.js allowed us to build a complex interface simply, while making the static data more engaging.

How We Got There

Hopefully you’re in love with the solution we came up with, and maybe you’re wondering how we got here. Well, you already know we did some research – surveyed current students and interviewed staff – to learn about the problems they were facing. We then brought our two teams together to brainstorm ideas using the Core Model exercise.

We took a few minutes to sketch out the proposed solution.

These ideas were then further refined in wireframes and design:

And finally developed in Drupal 8.

The tuition and fees calculator was designed to provide Gen Zs with the information they need to financially plan their summer at Stanford, but all generations can benefit from it. Since launch in November 2017, the calculator has appeared in the top 10 visited pages of the redesigned site, with an average of 3000 pageviews per month.

Categories: FLOSS Project Planets