FLOSS Project Planets

PyCharm: PyCharm 2017.3 EAP 6: Improved interpreter selection interface

Planet Python - Thu, 2017-10-19 15:07

The latest and greatest early access program (EAP) version of PyCharm is now available from our website:

Get PyCharm 2017.3 EAP 6

Improved Python interpreter selection interface

In the previous PyCharm 2017.3 EAP 5 build we introduced a new UI for configuring project interpreters in your existing projects. To recap:

  • The project interpreter dropdown in Settings | Project | Project interpreter now has only the virtualenvs you have specifically configured for that particular project, and virtualenvs that you’ve specifically configured to be shared between projects: 
  • Under the gear icon you will find ‘Add Local’ and ‘Add Remote’. Local interpreters are those that run directly on your operating system; remote interpreters include Docker and Vagrant in addition to any remote computers you connect to through SSH:
  • A new “Add local interpreter” dialog makes it much easier to configure a new virtualenv or conda environment and make it available for other projects:

This build has a new UI for configuring project interpreters during project creation.
Go to File | New project…:

In the New Project dialog you will get a similar experience to the one described above for the existing project interpreter selection. Here, if you’d like to reuse an existing virtualenv or system interpreter, you can select it under ‘Existing interpreter’. And if you create a new virtualenv and would like to reuse it in other projects in the future, you can check the ‘Make available to all projects’ option and it will appear in the dropdown on the project interpreter page in all of your other projects.

Install PyCharm with snap packages

Starting from this EAP build, we are offering an alternative installation method using snap packages for Ubuntu users. Snaps are quick to install, safe to run, easy to manage and they are updated automatically in the background every day, so you always have the newest builds as soon as they’re out. If you are running Ubuntu 16.04 LTS or later, snap is already preinstalled, so you can begin using snaps from the command line right away.

Installing PyCharm Professional or Community Edition 2017.3 EAP is now as easy as this simple command (please make sure you use just one of the options in square brackets):

$ sudo snap install [pycharm-professional | pycharm-community] --classic --edge

This command will install PyCharm Professional or Community from the “Edge” channel where we store EAP builds. Please note that the snap installation method is experimental; currently, we officially distribute only PyCharm 2017.3 EAP in the Edge channel.

Depending on which snap you’ve installed, you can run your new PyCharm 2017.3 EAP with:

$ [pycharm-professional | pycharm-community]

You can now use other snap commands for managing your snaps. The most frequently used commands are:

  • snap list – to list all the installed snaps
  • snap refresh --edge – to manually update a snap from the edge channel
  • sudo snap remove – to remove a snap from your system
  • sudo snap revert – to revert a snap to the previously installed version, should there be anything wrong with the current one

Snap supports auto-updates from the stable channel only. Since we distribute 2017.3 EAP builds in the edge channel, you'll need to manually invoke the following to update to the next EAP build:

$ snap refresh [pycharm-professional | pycharm-community] --edge

Read more about how snaps work, and let us know about your experience using snap to install and update PyCharm 2017.3 EAP, so we can consider whether snaps might be something we can use for our official stable releases. You can give us your feedback on Twitter or in the comments on this blog post.

Other improvements in this build:
  • Foreign Data Wrappers support for Postgres (PyCharm Pro only)
  • New folder-based grouping for data sources (PyCharm Pro only)
  • Completion for environment variables in the REST client (PyCharm Pro only)
  • Improved JavaScript support (PyCharm Pro only)
  • Various fixes for the Python debugger, console and Python code insight
  • And more, have a look at the release notes for details

If these features sound interesting to you, try them yourself:

Get PyCharm 2017.3 EAP 6

As a reminder, PyCharm EAP versions:

  • Are free, including PyCharm Professional Edition EAP
  • Will work for 30 days from the build date; you'll then need to update when the build expires

If you run into any issues with this version, or any other version of PyCharm, please let us know on our YouTrack. If you have any suggestions or remarks, you can reach us on Twitter, or by commenting on the blog.

Categories: FLOSS Project Planets

Drupal Association blog: Drupal Association Board Meeting Summary - 28 September, 2017

Planet Drupal - Thu, 2017-10-19 14:38

On 28 September 2017, the Drupal Association held its third open board meeting of the year, where community members listened in via Zoom and in person. You can find the meeting minutes, board materials, and meeting recording here.

The board meeting was kicked off by an update from Dries Buytaert, followed by an Executive update from Megan Sanicki, Executive Director, and a Drupal.org update from Tim Lehnen, Director of Engineering. We also thanked and celebrated Tiffany Farriss, Vesa Palmu, and Jeff Walpole whose terms on the board end in November.

Dries Buytaert moving from Chairman to Founding Director position

One of the key announcements made during the meeting came from Dries Buytaert, who announced that in response to the Community Discussions findings, he is stepping down from the Drupal Association Chairman position. He will remain on the board in the Founding Director position.  This will go into effect in November when board seats expire and Adam Goodman will step into the role as interim Chairman, which is also in response to the community’s request for a neutral, outside expert to lead the board. To learn more about the Community Discussions, go here.

Adam Goodman is a leadership professor from Northwestern University in Chicago, Illinois, USA. He's advised the Drupal Association on and off for the past 8 years, helping us evolve from a volunteer board to a strategic board. In this role, Adam will further evolve the board so it can orient itself around a new chairman structure.

Since Adam is a paid consultant, the Drupal Association needs to change its bylaws to allow Adam to sit on the board and be paid for his service. In addition to this change, we are doing a general update of the bylaws to include:

  • Eliminating non-existent committees like the HR committee

  • Modernizing the tools we can use for online voting. Today we can use teleconferencing, but we also need to be able to use video conferencing.

To learn more about this board meeting, please watch the recording and stay tuned for an update on other improvements we are making in response to the community’s input.

Categories: FLOSS Project Planets

Steinar H. Gunderson: Introducing Narabu, part 2: Meet the GPU

Planet Debian - Thu, 2017-10-19 13:45

Narabu is a new intraframe video codec. You may or may not want to read part 1 first.

The GPU, despite being far more flexible than it was fifteen years ago, is still a very different beast from your CPU, and not all problems map well to it performance-wise. Thus, before designing a codec, it's useful to know what our platform looks like.

A GPU has lots of special functionality for graphics (well, duh), but we'll be concentrating on the compute shader subset in this context, i.e., we won't be drawing any polygons. Roughly, a GPU (as I understand it!) is built up about as follows:

A GPU contains 1–20 cores; NVIDIA calls them SMs (shader multiprocessors), Intel calls them subslices. (Trivia: A typical mid-range Intel GPU contains two cores, and thus is designated GT2.) One such core usually runs the same program, although on different data; there are exceptions, but typically, if your program can't fill an entire core with parallelism, you're wasting energy. Each core, in addition to tons (thousands!) of registers, also has some “shared memory” (also called “local memory” sometimes, although that term is overloaded), typically 32–64 kB, which you can think of in two ways: Either as a sort-of explicit L1 cache, or as a way to communicate internally on a core. Shared memory is a limited, precious resource in many algorithms.

Each core/SM/subslice contains about 8 execution units (Intel calls them EUs, NVIDIA/AMD calls them something else) and some memory access logic. These multiplex a bunch of threads (say, 32) and run in a round-robin-ish fashion. This means that a GPU can handle memory stalls much better than a typical CPU, since it has so many streams to pick from; even though each thread runs in-order, it can just kick off an operation and then go to the next thread while the previous one is working.

Each execution unit has a bunch of ALUs (typically 16) and executes code in a SIMD fashion. NVIDIA calls these ALUs “CUDA cores”, AMD calls them “stream processors”. Unlike on a CPU, this SIMD has full scatter/gather support (although sequential access, especially in certain patterns, is much more efficient than random access), lane enable/disable so it can work with conditional code, etc. The typically fastest operation is a 32-bit float muladd; usually that's single-cycle. GPUs love 32-bit FP code. (In fact, in some GPU languages, you won't even have 8-, 16-bit or 64-bit types. This is annoying, but not the end of the world.)

The vectorization is not exposed to the user in typical code (GLSL has some vector types, but they're usually just broken up into scalars, so that's a red herring), although in some programming languages you can swizzle the SIMD stuff internally to take advantage of it (there are also schemes for broadcasting bits by “voting” etc.). However, it is crucially important to performance; if you have divergence within a warp, the GPU needs to execute both sides of the if. So less divergent code is good.

Such a SIMD group is called a warp by NVIDIA (I don't know if the others have names for it). NVIDIA has SIMD/warp width always 32; AMD used to be 64 but is now 16. Intel supports 4–32 (the compiler will autoselect based on a bunch of factors), although 16 is the most common.

The upshot of all of this is that you need massive amounts of parallelism to be able to get useful performance out of a GPU. A rule of thumb is that if you could have launched about a thousand threads for your problem on a CPU, it's a good fit for a GPU, although this is of course just a guideline.

There's a ton of APIs available to write compute shaders. There's CUDA (NVIDIA-only, but the dominant player), D3D compute (Windows-only, but multi-vendor), OpenCL (multi-vendor, but highly variable implementation quality), OpenGL compute shaders (all platforms except macOS, which has too old drivers), Metal (Apple-only) and probably some that I forgot. I've chosen to go for OpenGL compute shaders since I already use OpenGL shaders a lot, and this saves on interop issues. CUDA probably is more mature, but my laptop is Intel. :-) No matter which one you choose, the programming model looks very roughly like this pseudocode:

for (size_t workgroup_idx = 0; workgroup_idx < NUM_WORKGROUPS; ++workgroup_idx) {  // in parallel over cores
    char shared_mem[REQUESTED_SHARED_MEM];  // private for each workgroup
    for (size_t local_idx = 0; local_idx < WORKGROUP_SIZE; ++local_idx) {  // in parallel on each core
        main(workgroup_idx, local_idx, shared_mem);
    }
}

except in reality, the indices will be split in x/y/z for your convenience (you control all six dimensions, of course), and if you haven't asked for too much shared memory, the driver can silently make larger workgroups if it helps increase parallelity (this is totally transparent to you). main() doesn't return anything, but you can do reads and writes as you wish; GPUs have large amounts of memory these days, and staggering amounts of memory bandwidth.

Now for the bad part: Generally, you will have no debuggers, no way of logging and no real profilers (if you're lucky, you can get to know how long each compute shader invocation takes, but not what takes time within the shader itself). Especially the latter is maddening; the only real recourse you have is some timers, and then placing timer probes or trying to comment out sections of your code to see if something goes faster. If you don't get the answers you're looking for, forget printf—you need to set up a separate buffer, write some numbers into it and pull that buffer back from the GPU. Profilers are an essential part of optimization, and I had really hoped the world would be more mature here by now. Even CUDA doesn't give you all that much insight—sometimes I wonder if all of this is because GPU drivers and architectures are meant to be shrouded in mystery for competitiveness reasons, but I'm honestly not sure.

So that's it for a crash course in GPU architecture. Next time, we'll start looking at the Narabu codec itself.

Categories: FLOSS Project Planets

Drupal Association blog: Status of Speaker Agreement Violation

Planet Drupal - Thu, 2017-10-19 13:36

Our community does amazing things together and they deserve to have the best working environment for collaboration. At the Drupal Association, we strive to create these open and collaborative environments at DrupalCon and on Drupal.org.

We recently became aware that a community member violated our speaker agreement at DrupalCon. The Drupal Association removed the video from the DrupalCon event site and the Drupal Association YouTube channel and we are determining additional actions. The community member acknowledged that they broke the speaker agreement and is cooperating with the Drupal Association as we take action.

We apologize that this content was shared. It didn’t create the best environment for our community to thrive and we will do better. We are looking at ways to enhance our process to prevent situations like this from happening again.

We also heard, from the community discussion findings that were provided this summer, that the community needs a better understanding of the roles and responsibilities of volunteers who work on Drupal Association programs. The Drupal Association is working to define what is expected of each role, and policies for managing situations when expectations are not met. We are working on developing a clear outline of these, and you can expect to see them finalized by February 2018.

Categories: FLOSS Project Planets

Redfin Solutions: Pulling Salesforce Data in as Taxonomy Terms in D7

Planet Drupal - Thu, 2017-10-19 12:27
Pulling Salesforce Data in as Taxonomy Terms in D7

Salesforce Suite is a group of modules for Drupal that allows for pulling data from Salesforce into Drupal, as well as pushing data from Drupal to Salesforce. The module API provides some very useful hooks, including the hook_salesforce_pull_entity_presave hook implemented by the Salesforce Pull module. In this blog post, we’ll look at using that hook to pull three Salesforce custom fields (select lists) into Drupal as taxonomy terms in three vocabularies.

Christina October 19, 2017
Categories: FLOSS Project Planets

Norbert Preining: Analysing Debian packages with Neo4j

Planet Debian - Thu, 2017-10-19 10:21

I just finished the presentation at the Neo4j Online Meetup on getting the Debian UDD into a Neo4j graph database. Aside from the usual technical quibbles, it worked out quite well.

The code for pulling the data from the UDD, as well as converting and importing it into Neo4j is available on Github Debian-Graph. The slides are also available on Github: preining-debian-packages-neo4j.pdf.

There are still some things I want to implement, time permitting, because it would be a great tool for better integration for Debian. In any case, graph databases are lots of fun to play around with.

Categories: FLOSS Project Planets

Mediacurrent: Announcing Mediacurrent Labs

Planet Drupal - Thu, 2017-10-19 09:55

Mediacurrent’s commitment to innovation has been clear since our founding ten years ago.

Categories: FLOSS Project Planets

Kubuntu 17.10 Artful Aardvark is released

Planet KDE - Thu, 2017-10-19 09:51

Kubuntu 17.10 has been released, featuring the beautiful Plasma 5.10 desktop from KDE.

Codenamed “Artful Aardvark”, Kubuntu 17.10 continues our proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

The team has been hard at work through this cycle, introducing new features and fixing bugs.

Under the hood, there have been updates to many core packages, including a new 4.13-based kernel, KDE Frameworks 5.38, Plasma 5.10.5 and KDE Applications 17.04.3.


Kubuntu has seen some exciting improvements, with newer versions of Qt, updates to major packages like Krita, Kdenlive, Firefox and LibreOffice, and stability improvements to KDE Plasma.

For a list of other application updates, upgrading notes and known bugs, be sure to read our release notes.

Download 17.10 or read about how to upgrade from 17.04.

Categories: FLOSS Project Planets

Flocon de toile | Freelance Drupal: Filter content by year with Views on Drupal 8

Planet Drupal - Thu, 2017-10-19 06:00
It is not uncommon to offer filtering of content by date, and in particular by year. How do you filter the content of a view by year, based on a date field? We have an immediate solution using the Search API module coupled with Facets. The latter module makes it very easy to add a facet to a view, based on a date field of our content type, and to choose the granularity (year, month, day) that we wish to expose to visitors. But if you do not have these two modules for other reasons, it may be a shame to install them just for that. We can reach our goal pretty quickly with a native Views option, the contextual filter. Let's discover in a few images how to get there.
Categories: FLOSS Project Planets

Daniel Pocock: FOSDEM 2018 Real-Time Communications Call for Participation

Planet Debian - Thu, 2017-10-19 04:33

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2018 takes place 3-4 February 2018 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • related events around FOSDEM, including the XMPP summit,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs
Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Sunday, 4 February 2018. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username

Real-Time Communications dev-room: deadline 23:59 UTC on 30 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real Time Communications devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. It is encouraged to apply to more than one dev-room and also consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at https://fosdem.org/submit

Main track: the deadline for main track presentations is 23:59 UTC 3 November. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes: presentations aimed at developers of free and open source software about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators based on the received proposals. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming, volunteers are needed to assist in this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 3 February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation (text version) to other mailing lists
Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 February 2018. XMPP Summit web site - please join the mailing list for details.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 2 February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

  • All projects: Free-RTC Planet (http://planet.freertc.org) – contact planet@freertc.org
  • XMPP: Planet Jabber (http://planet.jabber.org) – contact ralphm@ik.nu
  • SIP: Planet SIP (http://planet.sip5060.net) – contact planet@sip5060.net
  • SIP (Español): Planet SIP-es (http://planet.sip5060.net/es/) – contact planet@sip5060.net

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.

Contact

For any private queries, contact us directly using the address fosdem-rtc-admin@freertc.org and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:

Categories: FLOSS Project Planets

Lintel Technologies: What is milter?

Planet Python - Thu, 2017-10-19 04:05

Everyone gets tons of email these days. This includes emails about super-duper offers from Amazon, and princes and wealthy businessmen trying to offer you their money from some African country that you have never heard of. Among all these emails in your inbox lie one or two valuable ones, either from your friends, bank alerts, or work-related stuff. Spam is a problem that email service providers have been battling for ages. There are a few open source spam-fighting tools available, like SpamAssassin or SpamBayes.

What is milter ?

Simply put – milter is mail filtering technology. It was designed by the sendmail project and is now available in other MTAs as well. Historically, people used all kinds of solutions for filtering mail on servers, using procmail or MTA-specific methods. The current scene seems to be moving toward Sieve, but there is a huge difference between milter and Sieve: Sieve comes into the picture after the mail has already been accepted by the MTA and handed over to the MDA. Milter, on the other hand, springs into action in the mail-receiving part of the MTA. When a remote server makes a new connection to your MTA, your MTA gives you the opportunity to accept or reject the mail at every step of the way: the new connection, the reception of each header, and the reception of the body.

[Image: simplified milter protocol stages]

The above picture depicts a simplified version of how the milter protocol works. Full details of the milter protocol can be found here: https://github.com/avar/sendmail-pmilter/blob/master/doc/milter-protocol.txt. And it's not only filtering: using milter, you can also modify the message or change headers.

HOW DO I GET STARTED WITH CODING A MILTER PROGRAM?

If you want to get started in C, you can use libmilter. For Python you have a couple of options:

  1. pymilter –  https://pythonhosted.org/milter/
  2. txmilter – https://github.com/flaviogrossi/txmilter

Postfix supports the milter protocol. You can find everything related to Postfix’s milter support here – http://www.postfix.org/MILTER_README.html
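To make this more concrete, here is a minimal sketch of a milter written with pymilter (option 1 above). The class name, the blocked word and the socket address are illustrative placeholders, not anything prescribed by pymilter or Postfix; check the pymilter documentation for the exact setup your MTA expects.

import Milter

class SubjectFilter(Milter.Base):
    # Reject mails whose Subject header contains a blocked word.

    def header(self, name, value):
        # Called once per received header, while the MTA is still waiting for our verdict.
        if name.lower() == "subject" and "viagra" in value.lower():
            return Milter.REJECT
        return Milter.CONTINUE

    def eom(self):
        # End of message: everything looked fine, accept it.
        return Milter.ACCEPT

if __name__ == "__main__":
    # Point your MTA's milter configuration at the same socket address.
    Milter.factory = SubjectFilter
    Milter.runmilter("subject-filter", "inet:8894@127.0.0.1", timeout=240)

For Postfix, the corresponding configuration would be along the lines of smtpd_milters = inet:127.0.0.1:8894 in main.cf (again, the port here is only an example).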

WHY NOT SIEVE, WHY MILTER?

I found Sieve to be rather limited. It doesn't offer many options for implementing complex logic; it was purposefully made that way. Also, Sieve starts at the end of the mail reception process, after the mail has already been accepted by the MTA.

Coding a milter program in your favorite programming language gives you full power and allows you to implement complex, creative stuff.

WATCH OUT!!!

When writing milter programs, take proper care to return a reply to the MTA quickly. Don't do long-running tasks in your milter program while the MTA is waiting for a reply. This can have crazy side effects, like remote parties submitting the same mail multiple times and filling up your inbox.

The post What is milter? appeared first on Lintel Technologies Blog.

Categories: FLOSS Project Planets

Talk Python to Me: #134 Python in Climate Science

Planet Python - Thu, 2017-10-19 04:00
What is the biggest challenge facing human civilization right now? Fake news, poverty, hunger? Yes, all of those are huge problems right now. Well, if climate change kicks in, you can bet it will amplify these problems and more. That's why it's critical that we get answers and fundamental models to help understand where we are, where we are going, and how we can improve things.
Categories: FLOSS Project Planets

Python Anywhere: Response times now available in webapp logs

Planet Python - Thu, 2017-10-19 03:46

We deployed a new version this morning. Most of the work was on a sekrit hidden feature that we can't talk about yet (oooo) but there is one small thing we've added that we hope you find useful: we've added response times to your webapp access logs:

You can see them at the end of each line, in the format response_time=N.NNN.
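If you want to dig into those numbers, a small script along these lines can pull out the slowest requests from a downloaded access log. The log file name is a placeholder, and only the trailing response_time=N.NNN field is assumed from the format above.

import re

RESPONSE_TIME = re.compile(r"response_time=(\d+\.\d+)\s*$")

def slowest_requests(log_path, top_n=10):
    # Return the top_n log lines with the largest response_time values.
    timed_lines = []
    with open(log_path) as log:
        for line in log:
            match = RESPONSE_TIME.search(line)
            if match:
                timed_lines.append((float(match.group(1)), line.rstrip()))
    return sorted(timed_lines, reverse=True)[:top_n]

if __name__ == "__main__":
    for response_time, line in slowest_requests("access.log"):
        print(response_time, line)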

We hope that you'll find these useful if you ever have to chase down performance issues. Let us know what you think!

Categories: FLOSS Project Planets

Steve Holden: What's In a Namespace?

Planet Python - Thu, 2017-10-19 01:24
Python programmers talk about namespaces a lot. The Zen of Python* ends with
Namespaces are one honking great idea—let’s do more of those!

and if Tim Peters thinks namespaces are such a good idea, who am I to disagree?

Resolution of Unqualified Names

Python programmers learned at their mothers' knees that Python looks up unqualified names in three namespaces—first, the local namespace of the currently-executing function or method; second, the global namespace of the module containing the executing code; third and last, the built-in namespace that holds the built-in functions and exceptions. So, it makes sense to understand the various namespaces that the interpreter can use. Note that when we talk about name resolution we are talking about how a value is associated with an unadorned name in the code.

In the main module of a running program there is no local namespace. A name must be present in either the module's global namespace or, if not there, in the built-in namespace that holds functions like len, the standard exceptions, and so on. In other words, when __name__ == '__main__' the local and global namespaces are the same.

When the interpreter compiles a function it keeps track of names which are bound inside the function body (this includes the parameters, which are established in the local namespace before execution begins) and aren't declared as either global or (in Python 3) nonlocal.  Because it knows the local names the interpreter can assign them a pre-defined place in the stack frame (where local data is kept for each function call), and does not generally need to perform a lookup. This is the main reason local access is faster than global access.

Although the interpreter identifies local names by the presence of bindings within a function body, there is nothing to stop you writing code that references the names before they are bound. Under those circumstances you will see an UnboundLocalError exception raised with a message like "local variable 'b' referenced before assignment".

For non-local names, something very like a dictionary lookup takes place first in the module's global namespace and then in the built-ins. If neither search yields a result then the interpreter raises a NameError exception with a message like "name 'nosuch' is not defined."
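A short, self-contained example illustrates both behaviours described above:

x = "global"                # bound in the module's global namespace

def reads_global():
    print(x)                # no local binding for x, so the global is found

def broken():
    print(x)                # UnboundLocalError: the assignment below makes
    x = "local"             # x local to the whole function body

reads_global()              # prints: global
try:
    broken()
except UnboundLocalError as exc:
    print(exc)              # e.g. "local variable 'x' referenced before assignment"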

Resolution of Qualified Names

Resolution of qualified names (those consisting of a sequence of names or expressions delimited by dots, such as os.path.join) starts by locating the first object's namespace (in this case os) in the standard way described above. Thereafter the mechanism can get complex because, like many Python features, you can control how it works for your own objects by defining __getattr__ and/or __getattribute__ methods, and because descriptors (primarily used in accessing properties) can cloud the picture.

In essence, though, the mechanism is that the interpreter, having located the object bound to the unqualified name, then makes a getattr call for the second name (in this case, path) in that namespace, yielding another object, against which a further getattr call is made with the third component of the name, and so on. If at any point a getattr fails then the interpreter raises an AttributeError exception with a message such as "'module' object has no attribute 'name'."
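In other words, a dotted lookup behaves roughly like a chain of getattr calls. A small sketch of that equivalence:

import os

path_obj = getattr(os, "path")        # lookup in the os module's namespace
join_fn = getattr(path_obj, "join")   # lookup in the os.path module's namespace
print(join_fn("a", "b") == os.path.join("a", "b"))   # True

try:
    getattr(os, "nosuch")             # a failed step raises AttributeError
except AttributeError as exc:
    print(exc)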

Understanding Expression Values

Once you understand the mechanisms for looking up the values of names, it becomes a little easier to understand how Python computes expression values. Once a name is resolved there may be other methods to apply, such as __getitem__ for subscripting or __call__ for function calls. These operations also yield values, whose namespaces can again be used to look up further names. So, for example, when you see an expression like
    e.orig.args[0].startswith('UNIQUE constraint failed')
you understand that the name e.orig.args is looked up by going through a sequence of namespaces and evaluates to a tuple object, to which a subscripting operation is applied to get the first element, in whose namespace the name startswith is resolved (hopefully to something callable) to a value that is finally called with a string argument.
Ultimately, by decomposing the expressions in this way you end up only dealing with one object at a time. Knowing how these mechanisms work in principle can help you to decipher complex Python code.
* Just type import this into a Python interpreter, or enter python -m this at the shell prompt, and hit return.

Categories: FLOSS Project Planets

Justin Mason: Links for 2017-10-18

Planet Apache - Wed, 2017-10-18 19:58
Categories: FLOSS Project Planets

Marcos Dione: dinant-0.5

Planet Python - Wed, 2017-10-18 16:51

I have a love and hate relationship with regular expressions (regexps). On one side, they're a very powerful tool for text processing; but on the other side of the coin, the most well known implementation is a language whose syntax is so dense it's hard to read beyond the most basic phrases. This clashes with my intention of trying to make programs as readable as possible[1]. It's true that you can add comments and make your regexps span several lines so you can digest them more slowly, but to me it feels like eating dried up soup by the teaspoon directly from the package without adding hot water.

So I started reading regexps aloud and writing down how I describe them in natural language. This way, [a-z]+ becomes one or more of any of the letters between lowercase a and lowercase z, but of course this is way too verbose.

Then I picked up these descriptions and tried to come up with a series of names (in the Python sense) that could be combined to build the same regexps. Even 'literate' programs are not really plain English, but a more condensed version, while still readable. Otherwise you end up with Perl, and not many think that's a good idea. So, that regexp becomes one_or_more(any_of('a-z')). As you can see, some regexp language can still be recognizable, but it's the lesser part.

So, dinant was born. It's a single source file module that implements that language and some other variants (any_of(['a-z'], times=[1, ]), etc). It also implements some prebuilt regexps for common constructs, like integer, a datetime() function that accepts strptime() patterns, or more complex things like IPv4 or IP_port. As I start using it in (more) real world examples (or issues are filed!), the language will slowly grow.
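As a rough illustration of the constructive style, here is a sketch that matches lines like "answer = 42". The building blocks (bol, eol, capture, integer, one_or_more, any_of) are the ones mentioned above; the exact behaviour of the object returned by match() is an assumption here.

import dinant as d

# match "key = value" style lines, where key is lowercase letters and value is an integer
key_value_re = (d.bol
                + d.capture(d.one_or_more(d.any_of('a-z')), name='key')
                + ' = '
                + d.capture(d.integer, name='value')
                + d.eol)

print(key_value_re.match('answer = 42'))    # expected: a successful match
print(key_value_re.match('answer = oops'))  # expected: None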

Almost accidentally, its constructive form brought along a nice feature: you can debug() your expression to find out the first sub-expression that fails to match:

# this is a real world example!
In [1]: import dinant as d
In [2]: line = '''36569.12ms (cpu 35251.71ms)\n'''
# can you spot the error?
In [3]: render_time_re = ( d.bol + d.capture(d.float, name='wall_time') + 'ms ' +
   ...:                    '(cpu' + d.capture(d.float, name='cpu_time') + 'ms)' + d.eol )
In [4]: print(render_time_re.match(line))
None
In [5]: print(render_time_re.debug(line))
# ok, this is too verbose (I hope next version will be more human readable)
# but it's clear it's the second capture
Out[5]: '^(?P<wall_time>(?:(?:\\-)?(?:(?:\\d)+)?\\.(?:\\d)+|(?:\\-)?(?:\\d)+\\.|(?:\\-)?(?:\\d)+))ms\\ \\(cpu(?P<cpu_time>(?:(?:\\-)?(?:(?:\\d)+)?\\.(?:\\d)+|(?:\\-)?(?:\\d)+\\.|(?:\\-)?(?:\\d)+))'
# the error is that the text '(cpu' needs a space at the end

Of course, the project is quite simple, so there is no regexp optimizer, which means that the resulting regexps are less readable than the ones you would have written by hand. The idea is that, besides debugging, you will never have to see them again.

Two features are on the back burner, and both are related. One is to make debugging easier by simply returning a representation of the original expression instead of the internal regexp used. That means, in the previous example, something like:

bol + capture(float, name='wall_time') + 'ms ' + '(cpu' + capture(float, name='cpu_time')

The second is that you can tell which types the different captured groups must convert to. This way, capture(float) would not return the string representing the float, but the actual float. The same for datetime() and others.

At the time of writing the project only lives on GitHub, but it will also be available on PyPI Any Time Soon®. Go grab it!

python ayrton

[1] for someone that knows how to read English, that is.

Categories: FLOSS Project Planets

بايثون العربي: The Structure of PyQt Applications

Planet Python - Wed, 2017-10-18 16:38

Last time we covered the basics of PyQt4; in this lesson we will look at the structure of PyQt applications. Although we can program a window in a few lines and without using object-oriented programming, we will face some challenges on the way to making our applications work well.

What we will do in this lesson is focus on laying the foundation for our application to grow, using object-oriented programming (OOP). If you are not familiar with OOP, there is no need to worry: we will not go deep into it, just cover a few basic concepts. All I ask of you is a little focus so that you get what this lesson is aiming for.

The basics stay the same: we need to define the application, define the graphical user interface, and we always need to call show() to bring the window up for the user. It usually looks like this:

app = QtGui.QApplication(sys.argv)  # define the application
GUI = Window()                      # define the GUI; show() will be hidden in here
sys.exit(app.exec_())               # this line ensures a clean exit

In the previous code, the only thing we have not done yet is the window object. That is what we will do now: create the window object that will invoke the graphical user interface. Creating the window object:

class Window(QtGui.QMainWindow):
    def __init__(self):
        super(Window, self).__init__()
        self.setGeometry(50, 50, 500, 300)
        self.setWindowTitle("PyQT tuts!")
        self.setWindowIcon(QtGui.QIcon('pythonlogo.png'))

First, we create the window class, which inherits the properties and features of QtGui.QMainWindow:

class Window(QtGui.QMainWindow):

After that, we create the __init__ method:

def __init__(self):

The purpose of the next line is to hand control back to the parent object's initializer:

super(Window, self).__init__()

Then we set the window's coordinates and size, as well as its title:

self.setGeometry(50, 50, 500, 300)
self.setWindowTitle("PyQT tuts!")

Then we set an icon for the window with the following line:

self.setWindowIcon(QtGui.QIcon('pythonlogo.png'))

Finally, we call up the graphical user interface:

self.show()

The reason we want to inherit from QtGui.QMainWindow is that we want to use all the GUI properties and features that Qt provides, while still creating our own object for customization.

We used super in order to apply the parent object's properties to this window, as if it were a Qt object.

As for the rest of the code, we have already covered it before.

Note that to refer to all of these aspects we used self, which refers to the current object, which is essentially a Qt object. This means that through self we can refer to all the methods we inherited from QtGui.QMainWindow, as well as the methods we write ourselves in this class.

 

That is all for this lesson; here is the complete code:

import sys
from PyQt4 import QtGui

class Window(QtGui.QMainWindow):
    def __init__(self):
        super(Window, self).__init__()
        self.setGeometry(50, 50, 500, 300)
        self.setWindowTitle("PyQT tuts!")
        self.setWindowIcon(QtGui.QIcon('pythonlogo.png'))
        self.show()

app = QtGui.QApplication(sys.argv)
GUI = Window()
sys.exit(app.exec_())

Next time, we will look at how to use buttons.

If you missed the first lesson, you can read it here.

Categories: FLOSS Project Planets

Joey Hess: extending Scuttlebutt with Annah

Planet Debian - Wed, 2017-10-18 15:31

This post has it all. Flotillas of sailboats, peer-to-peer wikis, games, and de-frogging. But, I need to start by talking about some tech you may not have heard of yet...

  • Scuttlebutt is a way for friends to share feeds of content-addressed messages, peer-to-peer. Most Scuttlebutt clients currently look something like Facebook, but there are also github clones, chess games, etc. Many private encrypted conversations going on. All entirely decentralized.
    (My scuttlebutt feed can be viewed here)

  • Annah is a purely functional, strongly typed language. Its design allows individual atoms of the language to be put in content-addressed storage, right down to data types. So the value True and a hash of the definition of what True is can both be treated the same by Annah's compiler.
    (Not to be confused with my sister, Anna, or part of the Debian Installer with the same name that I wrote long ago.)

So, how could these be combined together, and what might the result look like?

Well, I could start by posting a Scuttlebutt message that defines what True is. And another Scuttlebutt message defining False. And then, another Scuttlebutt message to define the AND function, which would link to my messages for True and False. Continue this until I've built up enough Annah code to write some almost useful programs.
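To make the idea of content-addressed definitions a bit more concrete, here is a toy sketch in plain Python (not Annah, and not real Scuttlebutt messages) of definitions that link to earlier definitions by their hashes:

import hashlib
import json

store = {}   # stand-in for the pool of content-addressed messages

def publish(definition):
    # Store a definition under the hash of its content and return that hash.
    blob = json.dumps(definition, sort_keys=True).encode()
    key = hashlib.sha256(blob).hexdigest()
    store[key] = definition
    return key

true_id = publish({"kind": "constructor", "name": "True"})
false_id = publish({"kind": "constructor", "name": "False"})

# AND doesn't repeat the definitions of True and False; it links to them by hash.
and_id = publish({"kind": "function", "name": "AND", "links": [true_id, false_id]})

print(and_id[:16], "->", store[and_id]["links"])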

Annah can't do any IO on its own (though it can model IO similarly to how Haskell does), so for programs to be actually useful, there needs to be Scuttlebutt client support. The way typing works in Annah, a program's type can be expressed as a Scuttlebutt link. So a Scuttlebutt client that wants to run Annah programs of a particular type can pick out programs that link to that type, and will know what type of data the program consumes and produces.

Here are a few ideas of what could be built, with fairly simple client-side support for different types of Annah programs...

  • Shared dashboards. Boats in a flotilla are communicating via Scuttlebutt, and want to share a map of their planned courses. Coders collaborating via Scuttlebutt want to see an overview of the state of their project.

    For this, the Scuttlebutt client needs a way to run a selected Annah program of type Dashboard, and display its output like a Scuttlebutt message, in a dashboard window. The dashboard message gets updated whenever other Scuttlebutt messages come in. The Annah program picks out the messages it's interested in, and generates the dashboard message.

    So, send a message updating your boat's position, and everyone sees it update on the map. Send a message with updated weather forecasts as they're received, and everyone can see the storm developing. Send another message updating a waypoint to avoid the storm, and steady as you go...

    The coders, meanwhile, probably tweak their dashboard's code every day. As they add git-ssb repos, they make the dashboard display an overview of their bugs. They get CI systems hooked in and feeding messages to Scuttlebutt, and make the dashboard go green or red. They make the dashboard A-B test itself to pick the right shade of red. And so on...

    The dashboard program is stored in Scuttlebutt so everyone is on the same page, and the most recent version of it posted by a team member gets used. (Just have the old version of the program notice when there's a newer version, and run that one..)

    (Also could be used in disaster response scenarios, where the data and visualization tools get built up on the fly in response to local needs, and are shared peer-to-peer in areas without internet.)

  • Smart hyperlinks. When a hyperlink in a Scuttlebutt message points to an Annah program, optionally with some Annah data, clicking on it can run the program and display the messages that the program generates.

    This is the most basic way a Scuttlebutt client could support Annah programs, and it could be used for tons of stuff. A few examples:

    • Hiding spoilers. Click on the link and it'll display a spoiler about a book/movie.
    • A link to whatever I was talking about one year ago today. That opens different messages as time goes by. Put it in your Scuttlebutt profile or something. (Requires a way for Annah to get the current date, which it normally has no way of accessing.)
    • Choose your own adventure or twine style games. Click on the link and the program starts the game, displaying links to choose between, and so on.
    • Links to custom views. For example, a link could lead to a combination of messages from several different, related channels. Or could filter messages in some way.
  • Collaborative filtering. Suppose I don't want to see frog-related memes in my Scuttlebutt client. I can write an Annah program that calculates a message's frogginess, and outputs a Filtered Message. It can leave a message unchanged, or filter it out, or perhaps minimize its display. I publish the Annah program on my feed, and tell my Scuttlebutt client to filter all messages through it before displaying them to me.

    I published the program in my Scuttlebutt feed, and so my friends can use it too. They can build other filtering functions for other stuff (such as an excess of orange in photos), and integrate my frog filter into their filter program by simply composing the two.

    If I like their filter, I can switch my client to using it. Or not. Filtering is thus subjective, like Scuttlebutt, and the subjectivity is expressed by picking the filter you want to use, or developing a better one.

  • Wiki pages. Scuttlebutt is built on immutable append-only logs; it doesn't have editable wiki pages. But they can be built on top using Annah.

    A smart link to a wiki page is a reference to the Annah program that renders it. Of course being a wiki, there will be more smart links on the wiki page going to other wiki pages, and so on.

    The wiki page includes a smart link to edit it. The editor needs basic form support in the Scuttlebutt client; when the edited wiki page is posted, the Annah program diffs it against the previous version and generates an Edit which gets posted to the user's feed. Rendering the page is just a matter of finding the Edit messages for it from people who are allowed to edit it, and combining them.

    Anyone can fork a wiki page by posting an Edit to their feed. And can then post a smart link to their fork of the page.

    And anyone can merge other forks into their wiki page (this posts a control message that makes the Annah program implementing the wiki accept those forks' Edit messages). Or grant other users permission to edit the wiki page (another control message). Or grant other users permissions to grant other users permissions.

    There are lots of different ways you might want your wiki to work. No one wiki implementation, but lots of Annah programs. Others can interact with your wiki using the program you picked, or fork it and even switch the program used. Subjectivity again.

  • User-defined board games. The Scuttlebutt client finds Scuttlebutt messages containing Annah programs of type Game, and generates a tab with a list of available games.

    The players of a particular game all experience the same game interface, because the code for it is part of their shared Scuttlebutt message pool, and the code to use gets agreed on at the start of a game.

    To play a game, the Scuttlebutt client runs the Annah program, which generates a description of the current contents of the game board.

    So, for chess, use Annah to define a ChessMove data type, and the Annah program takes the feeds of the two players, looks for messages containing a ChessMove, and builds up a description of the chess board.

    As well as the pieces on the game board, the game board description includes Annah functions that get called when the user moves a game piece. That generates a new ChessMove which gets recorded in the user's Scuttlebutt feed.

    This could support a wide variety of board games. If you don't mind the possibility that your opponent might cheat by peeking at the random seed, even games involving things like random card shuffles and dice rolls could be built. Also there can be games like Core Wars where the gamers themselves write Annah programs to run inside the game.

    Variants of games can be developed by modifying and reusing game programs. For example, timed chess is just the chess program with an added check on move time, and time clock display.

  • Decentralized chat bots. Chat bots are all the rage (or were a few months ago, tech fads move fast), but in a decentralized system like Scuttlebutt, a bot running on a server somewhere would be an ugly point of centralization. Instead, write an Annah program for the bot.

    To launch the bot, publish a message in your own personal Scuttlebutt feed that contains the bot's program, and a nonce.

    The user's Scuttlebutt client takes care of the rest. It looks for messages with bot programs, and runs the bot's program. This generates or updates a Scuttlebutt message feed for the bot.

    The bot's program signs the messages in its feed using a private key that's generated by combining the user's public key, and the bot's nonce. So, the bot has one feed per user it talks to, with deterministic content, which avoids a problem with forking a Scuttlebutt feed.

    The bot-generated messages can be stored in the Scuttlebutt database like any other messages and replicated around. The bot appears as if it were a Scuttlebutt user. But you can have conversations with it while you're offline.

    (The careful reader may have noticed that deeply private messages sent to the bot can be decrypted by anyone! This bot thing is probably a bad idea really, but maybe the bot fad is over anyway. We can only hope. It's important that there be at least one bad idea in this list..)

This kind of extensibility in a peer-to-peer system is exciting! With these new systems, we can consider lessons from the world wide web and replicate some of the good parts, while avoiding the bad. Javascript has been both good and bad for the web. The extensibility is great, and yet it's a neverending security and privacy nightmare, and it ties web pages ever more tightly to programs hidden away on servers. I believe that Annah combined with Scuttlebutt will comprehensively avoid those problems. Shall we build it?

This exploration was sponsored by Jake Vosloo on Patreon.

Categories: FLOSS Project Planets

Codementor: How I used Python to find interesting people to follow on Medium

Planet Python - Wed, 2017-10-18 15:16
Medium has a large amount of content, a large number of users, and an almost overwhelming number of posts. When you try to find interesting users to interact with, you’re flooded with visual noise.
Categories: FLOSS Project Planets