Feeds

Ruslan Spivak: Let’s Build A Simple Interpreter. Part 12.

Planet Python - Thu, 2016-12-01 16:20

“Be not afraid of going slowly; be afraid only of standing still.” - Chinese proverb.

Hello, and welcome back!

Today we are going to take a few more baby steps and learn how to parse Pascal procedure declarations.

What is a procedure declaration? A procedure declaration is a language construct that defines an identifier (a procedure name) and associates it with a block of Pascal code.

Before we dive in, a few words about Pascal procedures and their declarations:

  • Pascal procedures don’t have return statements. They exit when they reach the end of their corresponding block.
  • Pascal procedures can be nested within each other.
  • For simplicity, procedure declarations in this article won’t have any formal parameters. But don’t worry, we’ll cover that later in the series.

This is our test program for today:

PROGRAM Part12;
VAR
   a : INTEGER;

PROCEDURE P1;
VAR
   a : REAL;
   k : INTEGER;

   PROCEDURE P2;
   VAR
      a, z : INTEGER;
   BEGIN {P2}
      z := 777;
   END;  {P2}

BEGIN {P1}

END;  {P1}

BEGIN {Part12}
   a := 10;
END.  {Part12}

As you can see above, we have defined two procedures (P1 and P2) and P2 is nested within P1. In the code above, I used comments with a procedure’s name to clearly indicate where the body of every procedure begins and where it ends.

Our objective for today is pretty clear: learn how to parse code like that.


First, we need to make some changes to our grammar to add procedure declarations. Well, let’s just do that!

Here is the updated declarations grammar rule:

The procedure declaration sub-rule consists of the reserved keyword PROCEDURE followed by an identifier (a procedure name), followed by a semicolon, which in turn is followed by a block rule, which is terminated by a semicolon. Whoa! This is a case where I think the picture is actually worth however many words I just put in the previous sentence! :)
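In text form (reconstructed here from the description above; it matches the docstring of the declarations method shown later in the article), the rule reads:

```
declarations : VAR (variable_declaration SEMI)+
             | (PROCEDURE ID SEMI block SEMI)*
             | empty
```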

Here is the updated syntax diagram for the declarations rule:

From the grammar and the diagram above you can see that you can have as many procedure declarations on the same level as you want. For example, in the code snippet below we define two procedure declarations, P1 and P1A, on the same level:

PROGRAM Test;
VAR
   a : INTEGER;

PROCEDURE P1;
BEGIN {P1}

END;  {P1}

PROCEDURE P1A;
BEGIN {P1A}

END;  {P1A}

BEGIN {Test}
   a := 10;
END.  {Test}

The diagram and the grammar rule above also indicate that procedure declarations can be nested, because the procedure declaration sub-rule references the block rule, which contains the declarations rule, which in turn contains the procedure declaration sub-rule. As a reminder, here is the syntax diagram and the grammar for the block rule from Part 10:
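For readers of the text-only version, the block rule from Part 10 was (reconstructed from that article):

```
block : declarations compound_statement
```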


Okay, now let’s focus on the interpreter components that need to be updated to support procedure declarations:

Updating the Lexer

All we need to do is add a new token named PROCEDURE:

PROCEDURE = 'PROCEDURE'

And add ‘PROCEDURE’ to the reserved keywords. Here is the complete mapping of reserved keywords to tokens:

RESERVED_KEYWORDS = {
    'PROGRAM': Token('PROGRAM', 'PROGRAM'),
    'VAR': Token('VAR', 'VAR'),
    'DIV': Token('INTEGER_DIV', 'DIV'),
    'INTEGER': Token('INTEGER', 'INTEGER'),
    'REAL': Token('REAL', 'REAL'),
    'BEGIN': Token('BEGIN', 'BEGIN'),
    'END': Token('END', 'END'),
    'PROCEDURE': Token('PROCEDURE', 'PROCEDURE'),
}


Updating the Parser

Here is a summary of the parser changes:

  1. New ProcedureDecl AST node
  2. Update to the parser’s declarations method to support procedure declarations

Let’s go over the changes.

  1. The ProcedureDecl AST node represents a procedure declaration. The class constructor takes as parameters the name of the procedure and the AST node of the block of code that the procedure’s name refers to.

    class ProcedureDecl(AST):
        def __init__(self, proc_name, block_node):
            self.proc_name = proc_name
            self.block_node = block_node
  2. Here is the updated declarations method of the Parser class

    def declarations(self):
        """declarations : VAR (variable_declaration SEMI)+
                        | (PROCEDURE ID SEMI block SEMI)*
                        | empty
        """
        declarations = []

        if self.current_token.type == VAR:
            self.eat(VAR)
            while self.current_token.type == ID:
                var_decl = self.variable_declaration()
                declarations.extend(var_decl)
                self.eat(SEMI)

        while self.current_token.type == PROCEDURE:
            self.eat(PROCEDURE)
            proc_name = self.current_token.value
            self.eat(ID)
            self.eat(SEMI)
            block_node = self.block()
            proc_decl = ProcedureDecl(proc_name, block_node)
            declarations.append(proc_decl)
            self.eat(SEMI)

        return declarations

    Hopefully, the code above is pretty self-explanatory. It follows the grammar/syntax diagram for procedure declarations that you’ve seen earlier in the article.


Updating the SymbolTable builder

Because we’re not ready yet to handle nested procedure scopes, we’ll simply add an empty visit_ProcedureDecl method to the SymbolTableBuilder AST visitor class. We’ll fill it out in the next article.

def visit_ProcedureDecl(self, node): pass

Updating the Interpreter

We also need to add an empty visit_ProcedureDecl method to the Interpreter class, which will cause our interpreter to silently ignore all our procedure declarations.
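The article doesn’t show that method itself; a minimal sketch of what it would look like (class context abbreviated to the new method only):

```python
class Interpreter(object):
    # ...existing visit_* methods omitted...

    def visit_ProcedureDecl(self, node):
        # Do nothing: the interpreter silently ignores
        # procedure declarations for now.
        pass
```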

So far, so good.

Now that we’ve made all the necessary changes, let’s see what the Abstract Syntax Tree looks like with the new ProcedureDecl nodes.

Here is our Pascal program again (you can download it directly from GitHub):

PROGRAM Part12;
VAR
   a : INTEGER;

PROCEDURE P1;
VAR
   a : REAL;
   k : INTEGER;

   PROCEDURE P2;
   VAR
      a, z : INTEGER;
   BEGIN {P2}
      z := 777;
   END;  {P2}

BEGIN {P1}

END;  {P1}

BEGIN {Part12}
   a := 10;
END.  {Part12}


Let’s generate an AST and visualize it with the genastdot.py utility:

$ python genastdot.py part12.pas > ast.dot && dot -Tpng -o ast.png ast.dot

In the picture above you can see two ProcedureDecl nodes: ProcDecl:P1 and ProcDecl:P2 that correspond to procedures P1 and P2. Mission accomplished. :)

As a last item for today, let’s quickly check that our updated interpreter works as before when a Pascal program has procedure declarations in it. Download the interpreter and the test program if you haven’t done so yet, and run it on the command line. Your output should look similar to this:

$ python spi.py part12.pas
Define: INTEGER
Define: REAL
Lookup: INTEGER
Define: <a:INTEGER>
Lookup: a
Symbol Table contents:
Symbols: [INTEGER, REAL, <a:INTEGER>]
Run-time GLOBAL_MEMORY contents:
a = 10

Okay, with all that knowledge and experience under our belt, we’re ready to tackle the topic of nested scopes that we need to understand in order to be able to analyze nested procedures and prepare ourselves to handle procedure and function calls. And that’s exactly what we are going to do in the next article: dive deep into nested scopes. So don’t forget to bring your swimming gear next time! Stay tuned and see you soon!

Categories: FLOSS Project Planets

KDevelop 5.0.3 released

Planet KDE - Thu, 2016-12-01 16:00

KDevelop 5.0.3 released

Today, we are happy to announce the release of KDevelop 5.0.3, the third bugfix and stabilization release for KDevelop 5.0. An upgrade to 5.0.3 is strongly recommended to all users of 5.0.0, 5.0.1 or 5.0.2.

Together with the source code, we again provide a prebuilt one-file-executable for 64-bit Linux, as well as binary installers for 32- and 64-bit Microsoft Windows. You can find them on our download page.

List of notable fixes and improvements since version 5.0.2:

  • Fix a performance issue which would lead to the UI becoming unresponsive when lots of parse jobs were created (BUG: 369374)
  • Fix some behaviour quirks in the documentation view
  • Fix a possible crash on exit (BUG: 369374)
  • Fix tab order in problems view
  • Make the "Forward declare" problem solution assistant only pop up when it makes sense
  • Fix GitHub handling authentication (BUG: 372144)
  • Fix Qt help jumping to the wrong function sometimes
  • Windows: Fix the MSVC startup script not working in some environments
  • kdev-python: fix some small issues in the standard library info

The 5.0.3 source code and signatures can be downloaded from here.

sbrauch Thu, 12/01/2016 - 22:00 Category News Tags release windows linux KDevelop 5 Comments Permalink

Hi,

FYI, the download page still (12/02/2016 morning in Paris) points to the 5.0.2 AppImage. The right link is http://download.kde.org/stable/kdevelop/5.0.3/bin/linux/KDevelop-5.0.3-… .

Gaël


In reply to by Gaël (not verified)

Fixed now, thanks!


Updated, thank you!

Categories: FLOSS Project Planets

php[architect]: December 2016 – Scrutinizing Your Tests

Planet Drupal - Thu, 2016-12-01 13:58

The twelfth issue of 2016 is now available! This month we look at how to write good tests with Behat and using Test Driven Development. This issue also includes articles on using HTTPlug to decouple your HTTP Client and on Decoupled Blocks with Drupal and JavaScript. Our columnists have articles on writing a Chat bot, advice on securing your application’s secrets, making better bug reports, respecting diversity, and a look back at 2016.

Download your issue and read a FREE article today.

Categories: FLOSS Project Planets

Graham Dumpleton: What USER should you use to run Docker images.

Planet Python - Thu, 2016-12-01 11:55
If you follow this blog and my rants on Twitter you will know that I often complain about the prevalence of Docker-formatted container images that will only work if run as the root user, even though there is no technical reason to run them as root. With more and more organisations moving towards containers and using these images in production, some at least are realising that running them as root
Categories: FLOSS Project Planets

Friday New Beginnings Directory IRC meetup: December 2nd starting at 1 p.m. EST/18:00 UTC

FSF Blogs - Thu, 2016-12-01 11:49

Participate in supporting the FSD by adding new entries and updating existing ones. We will be on IRC in the #fsf channel on irc.freenode.org.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the FSD contains a wealth of useful information, from basic categories and descriptions to detailed info about version control, IRC channels, documentation, and licensing that has been carefully checked by FSF staff and trained volunteers.

While the FSD has been a great resource to the world over the past decade, and continues to be, it has the potential to be a resource of even greater value. But it needs your help!

This week's theme is new beginnings. With new volunteers joining us from last week's meeting, and the fact that we haven't yet had a meeting focused on adding new entries to the directory, it's time to focus a bit on the new. We'll of course be discussing ongoing projects as well, but this week we want to make sure the directory keeps growing even as we do the work on the infrastructure needed to make it better.

If you are eager to help and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today! There are also weekly FSD Meetings pages that everyone is welcome to contribute to before, during, and after each meeting.

Categories: FLOSS Project Planets

Anwesha Das: micro:bit : a round around the Sun

Planet Python - Thu, 2016-12-01 10:58

"The future of the world is in my classroom today"- Ivan W. Fitzwater. To shape the world we need to give the children the correct environment to learn new things. BBC micro:bit is such a project. It has been a year since micro:bit was launched in UK. This tiny device was launched with an aim to train young minds to think, create and of course code.

What is this fun-size piece?

micro:bit is an ARM-powered embedded board. This cookie-sized computer (4 × 5 cm) has two input buttons, a 5×5 grid of LED lights, and two microcontroller processors, one of which allows the device to work as a USB device on a personal computer irrespective of the operating system. The bigger chip in the upper left of the device is the ARM Cortex-M0 processor with 416 KiB of memory. For performing physics experiments it has a digital compass and an accelerometer. USB or an external battery can be used for powering the micro:bit. Other features, like Bluetooth connectivity, make the device so cool.

From Python to MicroPython

MicroPython is Python for microcontrollers and other small, constrained hardware. It is an implementation of Python 3: it behaves like Python but does not use the CPython source code; it was written from scratch. MicroPython comes with a useful subset of the Python standard library, and is published under the permissive MIT license.

Damien George: the man behind MicroPython

In 2013, Damien George started his project of shrinking the Python language so that it could run on small devices. He launched MicroPython as a Kickstarter project. It took him almost six months to prove the idea workable, and only after he had written a Python compiler lean enough to fit into 128 kilobytes of RAM. This "Genuinely Nice Chap" was awarded the PSF's Community Service Award in March 2016.

The Era of Digital creativity:

“I never teach my pupils, I only attempt to provide the conditions in which they can learn.” - Albert Einstein. For a long time, the BBC has been endeavoring to provide youth with good conditions in which to learn, especially around technology. It started this mission with the launch of the BBC Micro, a series of microcomputers, in the 1980s. The latest effort in that mission is the micro:bit project. The BBC started with the massive aspiration of giving these pocket-size fun machines to a million 11- and 12-year-old children in the UK. An amazing aim indeed: motivating a generation to think, to visualize and to code. Children will be able to make their fantasies come true on this blank slate.

PSF joining the mission:

The BBC found that Python is a good choice for the micro:bit. Python is well known as a relatively easy language to learn, and is especially popular for teaching children. The PSF is one of the key partners in this project, as the micro:bit is not only a fascinating education use case but also illustrates Python's utility in an area not normally associated with the language: embedded systems programming. Whenever we think of an embedded device, ideas like 'low-level programming' and 'coding in C' pop into our minds. But that makes things difficult, needs expertise, and is of course not very suitable for 11- or 12-year-olds. Here MicroPython came to help: with MicroPython, people can interact with the hardware in a lucid manner.

Nicholas Tollervey, our very own ntoll, is the guy who acted as the bridge between the PSF and the project. It is largely due to his efforts that MicroPython now runs on the micro:bit (followed by much jumping and shouting of 'woo hoo'). The following is what the father of the Mu code editor has to say about his journey with the micro:bit.

What made you interested in the project?

I was relatively well known in UK Python circles as being interested in education (I created and organize the PyCon UK education track for teachers and kids). A friend heard about the BBC's request for partners in a programming-in-education project called "Make it Digital" and suggested I take a look. The BBC's request for partners actually mentioned Python! That was at the end of 2014.

When did you get involved with it?

I took the BBC's request for partners and, with the permission of the PSF board, put together a proposal for the PSF to become a partner.

At this stage, all I knew was the project included programming and a mention of Python. Given this mention and Python's popularity as a teaching language, I felt it important that the Python community had the opportunity to step up and get involved.

In January 2015 the BBC invited the PSF to join a partnership relating to the mysterious "Make it Digital" project. We had to sign NDA agreements and it was only then that I learned of the plans for the BBC micro:bit device.

How has this project helped/changed the education system in the UK?

Every 11-12 year old in the UK should have one in their possession by now. A huge number of resources have been made available, for free, for teachers and learners to learn about programming. Lots of the partner organisations have become involved in the delivery of educational resources.

From a Python-specific perspective, between 80 and 100 of the UK's Python developers have been involved in either writing code for kids, creating tools for kids to use, turning up at teach-meets to work with teachers, or building cool educational projects for people to use in the classroom.

It's too early to tell what the impact has been. However, in ten years' time I'll know it's been a success if I interview a new graduate programmer and they say they got started with the micro:bit. :-)

How did the PSF get involved with this project?

The partnership was between organisations. A rag-tag band of volunteers from the UK's Python community was not an option - ergo, my acting as a PSF Fellow on behalf of the PSF.

This was actually quite useful since a large number of the volunteers got involved because they would be acting under the auspices of the PSF. It's an obvious way to motivate people into giving something back to the community - run it as a PSF project.

What was the PSF's role / how did the PSF help the project?

The original role of the PSF was to create educational resources, offer Python expertise and provide access to events such as PyCon UK through which teachers could be reached. The BBC explained that another partner was actually building the Python software for the device.

The complete story is told in this blog post:

http://ntoll.org/article/story-micropython-on-microbit

What are the changes that the project had brought?

Well, I believe it has brought the MicroPython project to the attention of many people in the education world and the wider Python community. I also believe it has brought educational efforts to the attention of programmers.

Education is important. It's how we decide what our community is to become through our interaction with our future colleagues, friends and supporters.

micro:bit in PyCon UK 2016

This year's PyCon UK took place from the 15th to the 19th of September, 2016, in Cardiff. The PSF, as part of its prime mission "to promote the programming language Python", sponsors Python conferences around the globe. "They are very generous sponsors" was what ntoll had to say about the role of the PSF in PyCon UK.

To teach the students, one has to educate the teachers. A lot of preparatory work had been done before actually distributing the micro:bits, including changes made to the students' CS curriculum. At PyCon UK, ntoll ran several workshops for teachers and another for kids as part of the education track. They distributed these fun machines (almost 400) to the attendees.

Presently there is a lot of micro-fun and micro-love going on at PyCon UK.

Present status of micro:bit

One of the primary reasons for the PSF joining the mission was that the project is open source. Both the software and the hardware designs have been released under open licenses. Now that the designs are open and available to the masses, anyone and everyone can remake these fun pieces. The Micro:bit Foundation has been created, and the PSF is a part of it.

Renaissance in the making: join the movement

micro:bit has thousands of possibilities hidden in it. People are exploring them, drawing their dreams with the micro:bit. The following are some cool projects done with the micro:bit:

A keyboard constructed just by affixing a simple buzzer to the micro:bit.

Our childhood game of snakes (https://twitter.com/HolyheadCompSci/status/750242493996957696), recreated with the help of micro:bit and MicroPython.

An effective clap-controlled robot built with the help of micro:bit and MicroPython.

"Knowledge is power". Nowadays, heroes don't come with a sword, but with a micro:bit in their hands. So, if you want to learn, have fun and be a part of the mission grab your own micro:bit and start coding.

Categories: FLOSS Project Planets

CiviCRM Blog: The quest for performance improvements - 2nd sprint

Planet Drupal - Thu, 2016-12-01 10:56

Three weeks ago I wrote about our quest for performance at the Socialist party. This week we had a follow up sprint and I want to thank you for all the comments on that blog.

During this sprint we looked into whether the number of groups (+/- 2,700) was slowing down the system. We developed a script for deleting a set of groups from all database tables, deleted around 2,400 groups from the system, and saw that this had a positive impact on performance.

Before deleting the groups, adding a new group took around 14 seconds. After removing the 2,400 groups, adding a new group took around 3 seconds. So that gave us a direction in which to look for a solution.

We also looked at what would happen if we deleted all contacts without a membership from the database; that also had a positive impact, but not as big as reducing the number of groups. The reason we looked into this is that around 200,000 contacts in the system are not members but sympathizers for a specific campaign.

We also had an experienced database guy (who mainly knows Postgres) look into database tuning; at the moment we don't yet know the outcome of his inspection.

From what we have discovered by reducing the groups, we have two paths to follow:

  1. Actually reducing the amount of groups in the system
  2. Developing an extension which does functionally the same thing as groups, but with a better structure underneath and developed with performance in mind (no civicrm_group_contact_cache; no need for nesting with multiple parents; no need for smart groups).

Both paths are going to be discussed at the Socialist Party, and in two weeks we have another sprint in which we hope to continue the performance improvements.


Drupal
Categories: FLOSS Project Planets

Lintel Technologies: How to upload a file in Zoho using python?

Planet Python - Thu, 2016-12-01 10:21
UploadFile API Method of Zoho CRM Table of Contents
  • Purpose
  • Request URL
  • Request Parameters
  • Python Code to Upload a file to a record
  • Sample Response
Purpose

You can use this method to attach files to records.

Request URL

XML Format:
https://crm.zoho.com/crm/private/xml/Leads/uploadFile?authtoken=Auth Token&scope=crmapi&id=Record Id&content=File Input Stream

JSON Format:
https://crm.zoho.com/crm/private/json/Leads/uploadFile?authtoken=Auth Token&scope=crmapi&id=Record Id&content=File Input Stream

Request Parameters

Parameter      Data Type        Description
authtoken*     String           Encrypted alphanumeric string to authenticate your Zoho credentials.
scope*         String           Specify the value as crmapi
id*            String           Specify unique ID of the “record” or “note” to which the file has to be attached.
content        FileInputStream  Pass the File Input Stream of the file
attachmentUrl  String           Attach a URL to a record.

* – Mandatory parameter

Important Note:

  • The total file size should not exceed 20 MB.
  • Your program can request only up to 60 uploadFile calls per min. If API User requests more than 60 calls, system will block the API access for 5 min.
  • If the size exceeds 20 MB, you will receive the following error message: “File size should not exceed 20 MB“. This limit does not apply to URLs attached via attachmentUrl.
  • The attached file will be available under the Attachments section in the Record Details Page.
  • Files can be attached to records in all modules except Reports, Dashboards and Forecasts.
  • In the case of the parameter attachmentUrl, content is not required as the attachment is from a URL.
    Example for attachmentUrl: crm/private/xml/Leads/uploadFile?authtoken=*****&scope=crmapi&id=<entity_id>&attachmentUrl=<insert_URL>
Python Code to Upload a file to a record

Here’s a simple script that you can use to upload a file in zoho using python.

Go to https://pypi.python.org/pypi/MultipartPostHandler2/0.1.5 and get the egg file and install it.

In the program, you need to specify values for the following:
  • Your Auth Token
  • The ID of the Record
  • The uploadFile Request URL in the format mentioned above
  • The File Path i.e the location of the File

import MultipartPostHandler, urllib2

ID = 'put the accounts zoho id here'
authtoken = 'your zoho authtoken here'
fileName = "your file name here - i use the full path and filename"

opener = urllib2.build_opener(MultipartPostHandler.MultipartPostHandler)
params = {'authtoken': authtoken, 'scope': 'crmapi', 'newFormat': '1',
          'id': ID, 'content': open(fileName, "rb")}
final_URL = "https://crm.zoho.com/crm/private/json/Accounts/uploadFile"
rslts = opener.open(final_URL, params)
print rslts.read()

Sample Response

{ "response":{ "result":{ "recorddetail":{ "FL":[ { "val":"Id", "content":"2211247000000120001" }, { "val":"Created Time", "content":"2016-11-19 14:54:07" }, { "val":"Modified Time", "content":"2016-11-19 14:54:07" }, { "val":"Created By", "content":"mayur" }, { "val":"Modified By", "content":"mayur" } ] }, "message":"File has been attached successfully" }, "uri":"/crm/private/json/Accounts/uploadFile" } }

 

The post How to upload a file in Zoho using python? appeared first on Lintel Technologies Blog.

Categories: FLOSS Project Planets

Import Python: ImportPython Issue 101 - Python Quiz Results, Deployment, Code linting, Memory management and more

Planet Python - Thu, 2016-12-01 09:37
Worthy Read
Quiz Results Thanks everyone for participating in the quiz. Nico Ekkart, Chad Heyne, Artem Bezukladichnii, Andrew Nester and Kyle Monson Congrats. Your copies of Writing Idiomatic Python is on its way. The Answers are on the blog post. ImportPython Subscribers can get a copy of Writing Idiomatic Python for a special price at https://jeffknupp.com/writing-idiomatic-python-ebook-importpython-q2vwt5/ . Thank you Jeff.
quiz
How We Deploy Python Code? - Nylas Building, packaging, and deploying Python using versioned artifacts in Debian packages. At Nylas, we’ve developed a better way to deploy Python code along with its dependencies, resulting in lightweight packages that can be easily installed, upgraded, or removed. And we’ve done it without transitioning our entire stack to a system like Docker, CoreOS, or fully-baked AMIs.
deployment
Turn Errors into Awesome Quickly pinpoint what’s broken and why. Get the context and insights to defeat all application errors. Full-stack error monitoring for your Python apps. Note - Python docs https://rollbar.com/docs/notifier/pyrollbar/ . Sponsor
How code linting will make you awesome at Python? In Python code reviews I’ve seen over and over that it can be tough for developers to format their Python code in a consistent way: extra whitespace, irregular indentation, and other “sloppiness” often lead to actual bugs in the program. Luckily, automated tools can help with this common problem. Code linters make sure your Python code is always formatted consistently - and their benefits go way beyond that.
code-quality
Check Whether All Items Match a Condition in Python - By Trey Hunner Use of any/all with generator expressions for improved readability and code clarity.
core-python
How to become a hedge fund Python coder, by the CTO of AHL We asked Collier what it takes to code in Python for a major quant fund. – And whether learning how to code as a second career after trading is actually viable. This is what he said.
interview
Python 3 for Computational Science and Engineering It's a free book available for download. This text summarises a number of core ideas relevant to Computational Engineering and Scientific Computing using Python. The emphasis is on introducing some basic Python (programming) concepts that are relevant for numerical algorithms. The later chapters touch upon numerical libraries such as `numpy` and `scipy` each of which deserves much more space than provided here. We aim to enable the reader to learn independently how to use other functionality of these libraries using the available documentation (online and through the packages itself).
book
New Relic Infrastructure Webinar Move fast, with confidence. Learn more about Infrastructure at an upcoming webinar. Sponsor
Django, fast: part 2 In this second follow-up post Patryk Zawadzki makes use of the wrk benchmarking tool and shows us the performance of gunicorn, uwsgi, and PyPy. Besides benchmarking, there are good insights into the do's and don'ts of each deployment option.
django
Python Memory Management One of the things you should know, or at least get a good feel about, is the sizes of basic Python objects. Another thing is how Python manages its memory internally.
memory management
Python and Virtual Environments A tour/tutorial of everything virtualenv.
virtual environment
Lets talk about object “optimizations”: __slots__ and namedtuples.
performance
Text Mining in Python through the HTRC Feature Reader We introduce a toolkit for working with the 13.6 million volume Extracted Features Dataset from the HathiTrust Research Center. You will learn how to peer at the words and trends of any book in the collection, while developing broadly useful Python data analysis skills.
data mining
Awesome Django Admin Curated list of awesome django resources aptly named Awesome Django Admin . If you have seen Awesome Python, it's on the same lines. Contribute to it.
django-admin

Jobs
Python & C++ Senior Developer at LeadSurf Digital Marketing Corporation Ayala Avenue, Makati, NCR, Philippines Developing a customized marketing software or program.


Projects
violentshell/rollmac - 89 Stars, 13 Fork Free networks often impose either a time or data restriction and this can be used quickly. When this happens you can change your mac address and reconnect, but this is annoying, and it takes time. In addition, most networks will ask you to re-accept the terms and conditions of the network in order to continue. Rollmac is designed to automate this process by using the WPAD protocol to discover the login page and automatically re-accept the terms and conditions.
dimmg/flusk - 52 Stars, 2 Fork Flask - SQLAlchemy's declarative base - Docker - custom middleware.
amazon-polly-sample - 12 Stars, 2 Fork This app allows you to easily convert any publicly available RSS content into audio Podcasts, so you can listen to your favorite blogs on mobile devices instead of reading them.
Categories: FLOSS Project Planets

Lintel Technologies: DBSlayer

Planet Python - Thu, 2016-12-01 09:33

DBSlayer is a simpler way to proxy mysql.

DBSlayer can be queried via JSON over HTTP, and the responses can be given in one of the following supported languages: JSON, PHP or Python, which makes processing the database results straightforward.

It is a multi-threaded server written in C.

Features :

  • Reduce configuration
  • Reduce dependencies
  • Handle failovers
  • Simple load balancing
  • Easy to monitor
  • Minimal performance overhead
  • Work in different configuration scenarios
  • Support different programming languages

Installing DBSlayer :

git clone https://github.com/derekg/dbslayer.git
./configure
make
make install

Database URI :

http://machine:port/db?URLENCODED(JSON OBJECT)

http://machine:port/dbform?URLENCODED(HTML FORM)

Parameters :

SQL – SQL to execute

Example Request :

http://localhost:9090/dbform?SQL=SELECT+name+FROM+emp

Example Response :

{'RESULT': {'HEADER': ['name'],
            'ROWS': [['name']],
            'TYPES': ['MYSQL_TYPE_VAR_STRING'],
            'SERVER': 'servername'}}
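The /db endpoint, which takes a URL-encoded JSON object rather than a form, isn't exemplified in this post. Here is a hypothetical sketch of building such a URI (the helper name is mine, and it uses Python 3's urllib rather than the urllib2 used below):

```python
import json
from urllib.parse import quote

def dbslayer_uri(host, port, sql):
    # The /db endpoint expects a URL-encoded JSON object;
    # SQL is the documented parameter carrying the statement.
    payload = json.dumps({"SQL": sql})
    return "http://%s:%d/db?%s" % (host, port, quote(payload))

print(dbslayer_uri("localhost", 9090, "SELECT name FROM emp"))
```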

Example for python :

import urllib2, urllib, json

def dbex():
    uri = 'http://localhost:9090/dbform?SQL=%s'
    data = urllib2.urlopen(uri % urllib.quote('SELECT * FROM market')).read()
    print json.loads(data)

dbex()
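Given the documented HEADER/ROWS shape of the result, a small helper (my own name, not part of DBSlayer) can turn the parsed response into a list of dicts:

```python
def dbslayer_rows(response):
    # Zip the HEADER column names with each row in ROWS.
    result = response['RESULT']
    return [dict(zip(result['HEADER'], row)) for row in result['ROWS']]

# Example with a response shaped like the one documented above
sample = {'RESULT': {'HEADER': ['name'], 'ROWS': [['alice'], ['bob']]}}
rows = dbslayer_rows(sample)
```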

Start dbslayer :

dbslayer -c /path/dbslayer.conf -s servername

This starts up a DBSlayer daemon on port 9090 (the default port can be changed), which acts as a proxy for the backend MySQL server. The proxy can be queried via JSON over HTTP.

Stop dbslayer :

pkill dbslayer

Other URI/API endpoints :

http://machine:port/stats [Queries per second]

http://machine:port/stats/log [Last 100 requests]

http://machine:port/stats/errors [Last 100 errors]

http://machine:port/shutdown [Only from localhost]

The post DBSlayer appeared first on Lintel Technologies Blog.

Categories: FLOSS Project Planets

Django Weekly: Django Weekly 15th Issue

Planet Python - Thu, 2016-12-01 09:31
Worthy Read
The Changelog 229: Python, Django, and Channels with Andrew Godwin - Podcast
Django core contributor Andrew Godwin joins the show to tell us all about Python and Django. If you've ever wondered why people love Python, what Django's virtues are as a web framework, or how Django Channels measure up to Phoenix's Channels and Rails' Action Cable, this is the show for you. Also: Andrew's take on funding and sustaining open source efforts.
podcast
Django, fast: part 2
In this second follow-up post Patryk Zawadzki makes use of the wrk benchmarking tool and shows us the performance of gunicorn, uwsgi and PyPy. Besides benchmarking, there are good insights into the do's and don'ts of each deployment option.
performance
Turn Errors into Awesome
Quickly pinpoint what’s broken and why. Get the context and insights to defeat all application errors. Full-stack error monitoring for your Python apps. Note - Python docs https://rollbar.com/docs/notifier/pyrollbar/ .
sponsor
Support the Django Software Foundation - Fundraising
Our main focus is direct support of Django's developers. This means organizing and funding development sprints so that Django's developers can meet in person, and more.
DSF, community, fundraising
Awesome Django Admin
Just started a curated list of awesome Django admin resources, aptly named Awesome Django Admin. If you have seen Awesome Python, it's along the same lines. I will be adding more to the list in the next couple of days. Feel free to send a pull request if you have any additions.
admin-panel
Ask HN: Why should I use Django? | Hacker News
Discussion on Hacker News. Well, we already know why ;). My first deployment in 2007 was because we needed a CRUD interface for data residing in a DB. Django's inspectdb coupled with the admin interface did the job. The admin panel is definitely a top reason to use Django.
discussion
New Relic Infrastructure WebinarNew Relic offers a performance management solution enabling developers to diagnose and fix application performance problems in real time. Move fast, with confidence. Learn more about Infrastructure at an upcoming webinar.
sponsor
Security advisory: Vulnerability in password reset (master branch only)
Today, Florian Apolloner, a member of the Django security team, discovered and fixed a critical security issue in the new PasswordResetConfirmView that was added to the Django master branch on July 16th, 2016. The view didn't validate the password reset token on POST requests and therefore allowed anyone to reset passwords for any user.
security
Della
Della is a Django app for managing Secret Santa/Gift Exchange.
open_source_project
How to Add User Profile To Django Admin
There are several ways to extend the default Django User model. Perhaps the most common (and also the least intrusive) is to extend the User model using a one-to-one link. This strategy is also known as User Profile. One of the challenges of this particular strategy, if you are using Django Admin, is how to display the profile data in the User edit page. And that's what this tutorial is about.
User model, adminpanel

Projects
django-heroku-skeleton - 7 Stars, 0 Fork
A modern Django 1.10+ skeleton. Works perfectly with Heroku.
ujson_drf - 4 Stars, 0 Fork
JSON Renderer and Parser for Django Rest Framework using the ultra fast json (in C).
Categories: FLOSS Project Planets

Daniel Pocock: Using a fully free OS for devices in the home

Planet Debian - Thu, 2016-12-01 08:11

There are more and more devices around the home (and in many small offices) running a GNU/Linux-based firmware. Consider routers, entry-level NAS appliances, smart phones and home entertainment boxes.

More and more people are coming to realize that there is a lack of security updates for these devices and a big risk that the proprietary parts of the code are either very badly engineered (if you don't plan to release your code, why code it properly?) or deliberately includes spyware that calls home to the vendor, ISP or other third parties. IoT botnet incidents, which are becoming more widely publicized, emphasize some of these risks.

On top of this is the frustration of trying to become familiar with numerous different web interfaces (for your own devices and those of any friends and family members you give assistance to) and the fact that many of these devices have very limited feature sets.

Many people hail OpenWRT as an example of a free alternative (for routers), but I recently discovered that OpenWRT's web interface won't let me enable both DHCP and DHCPv6 concurrently. The underlying OS and utilities fully support dual stack, but the UI designers haven't encountered that configuration before. Conclusion: move to a device running a full OS, probably Debian-based, but I would consider BSD-based solutions too.

For many people, the benefit of this strategy is simple: use the same skills across all the different devices, at home and in a professional capacity. Get rapid access to security updates. Install extra packages or enable extra features if really necessary. For example, I already use Shorewall and strongSwan on various Debian boxes and I find it more convenient to configure firewall zones using Shorewall syntax rather than OpenWRT's UI.

Which boxes to start with?

There are various considerations when going down this path:

  • Start with existing hardware, or buy new devices that are easier to re-flash? Sometimes there are other reasons to buy new hardware, for example, when upgrading a broadband connection to Gigabit or when an older NAS gets a noisy fan or struggles with SSD performance and in these cases, the decision about what to buy can be limited to those devices that are optimal for replacing the OS.
  • How will the device be supported? Can other non-technical users do troubleshooting? If mixing and matching components, how will faults be identified? If buying a purpose-built NAS box and the CPU board fails, will the vendor provide next day replacement, or could it be gone for a month? Is it better to use generic components that you can replace yourself?
  • Is a completely silent/fanless solution necessary?
  • Is it possible to completely avoid embedded microcode and firmware?
  • How many other free software developers are using the same box, or will you be first?
Discussing these options

I recently started threads on the debian-user mailing list discussing options for routers and home NAS boxes. A range of interesting suggestions has already appeared; it would be great to see any other ideas that people have about these choices.

Categories: FLOSS Project Planets

Long time no write

Planet KDE - Thu, 2016-12-01 07:01

My new job

I’m pretty bad at blogging, so as usual there’s a ton of stuff that has happened since last time I blogged.

In KDE-land I've mostly been helping out with porting stuff to KDE Frameworks and finishing up those ports. A lot of smaller stuff like krename and kregexpeditor, but also helping to finish the porting of e.g. okular, ktorrent and konsole. And of course also Filelight.

I also worked a bit on new features, like URL hints in Konsole. Press a configurable key combo, numbers show up over recognized links, and pressing a number opens the link: 

I also discovered an old patch from Adam Treat to make Konsole recognize local files in addition to just URLs, which I cleaned up and integrated: 

I also tried to improve the look of search results, but making something that looks good with all color schemes is really hard, so I'm kind of stuck for now. If anyone has any ideas on how to do this properly it would be appreciated. But the last iteration looks like this:

Also a ton of smaller stuff in Konsole, like supporting OSC 7 instead of just polling /proc to figure out the current path, which should improve power saving a tiny bit. And a lot of stuff in various other KDE applications (and other applications, like the thermald Qt interface) and libraries that I don't remember.

 

Unfortunately I haven't had as much time for KDE stuff as usual, as I got a new job last year at a pretty cool company, taking up a lot of my time. We haven't been public until yesterday, so I couldn't really write much about it publicly, but now I can. We're making a digital notebook, a tablet device with an e-paper display from E Ink and a digitizer from Wacom. The use cases we're targeting are reading, writing and sketching, so it is a pretty specialized device without a web browser or "social" integration and Facebook support and whatnot.

As we’re using Linux and Qt for pretty much everything above the kernel I’m really thankful for the KDE Frameworks, having such a nice collection of high-quality extra libraries makes my life much easier. We’re even using Qt for the HW testing application used in the factory (pictured at the top of the page).

And before anyone asks the obvious question: we don't have the resources to officially support third-party development, so no official SDK. But I do plan on releasing the toolchain, and there will be a way to enable SSH access over USB, so people can play with their own devices. This should also allow us to use (L)GPL3 code on the device; using a non-ancient version of bash is nice.

The official page with more information is at https://getremarkable.com/

 


Categories: FLOSS Project Planets

Import Python: Quiz Results

Planet Python - Thu, 2016-12-01 06:21

The Winners of the quiz are Nico Ekkart - @nicoekkart, Chad Heyne, Artem Bezukladichnii, Andrew Nester - andrewnester and Kyle Monson. Congrats. Your prize is on the way.

The right answers to the quiz are

1. Which of the following imports will cause an ImportError? - A

2. Which of the below statements is idiomatic? - B

3. What will be the value of the variable "result" in the below program? - []

4. What numbers are printed by the below program? - A

5. What's the default byte-code interpreter of Python? - CPython

Will be doing some analysis (read: graphic charts) for you in the coming week.

Jeff Knupp is offering an exclusive discount for all Import Python readers on his book "Writing Idiomatic Python". Grab it now.

Categories: FLOSS Project Planets

Codementor: 15 Essential Python Interview Questions

Planet Python - Thu, 2016-12-01 06:01

##Introduction

Looking for a Python job? Chances are you will need to prove that you know how to work with Python. Here are a couple of questions that cover a wide base of skills associated with Python. Focus is placed on the language itself, and not any particular package or framework. Each question will be linked to a suitable tutorial if there is one. Some questions will wrap up multiple topics.

I haven't actually been given an interview test quite as hard as this one; if you can get to the answers comfortably, then go get yourself a job.

##What this tutorial is not

This tutorial does not aim to cover every available workplace culture - different employers will ask you different questions in different ways; they will follow different conventions; they will value different things. They will test you in different ways. Some employers will sit you down in front of a computer and ask you to solve simple problems; some will stand you up in front of a whiteboard and do similar; some will give you a take-home test to solve; some will just have a conversation with you.

The best test for a programmer is actually programming. This is a difficult thing to test with a simple tutorial. So for bonus points make sure that you can actually use the functionality demonstrated in the questions. If you actually understand how to get to the answers well enough that you can actually make use of the demonstrated concepts then you are winning.

Similarly, the best test for a software engineer is actually engineering. This tutorial is about Python as a language. Being able to design efficient, effective, maintainable class hierarchies for solving niche problems is great and wonderful and a skill set worth pursuing but well beyond the scope of this text.

Another thing this tutorial is not is PEP8 compliant. This is intentional because, as mentioned before, different employers will follow different conventions. You will need to adapt to fit the culture of the workplace. Because practicality beats purity.

Another thing this tutorial isn’t is concise. I don’t want to just throw questions and answers at you and hope something sticks. I want you to get it, or at least get it well enough that you are in a position to look for further explanations yourself for any problem topics.

Want to ace your technical interview? Schedule a Technical Interview Practice Session with an expert now!

Question 1

What is Python really? You can (and are encouraged) make comparisons to other technologies in your answer

###Answer

Here are a few key points:
- Python is an interpreted language. That means that, unlike languages like C and its variants, Python does not need to be compiled before it is run. Other interpreted languages include PHP and Ruby.

  • Python is dynamically typed: this means that you don’t need to state the types of variables when you declare them or anything like that. You can do things like x=111 and then x="I'm a string" without error

  • Python is well suited to object-oriented programming in that it allows the definition of classes along with composition and inheritance. Python does not have access specifiers (like C++’s public and private); the justification given for this is “we are all adults here”

  • In Python, functions are first-class objects. This means that they can be assigned to variables, returned from other functions and passed into functions. Classes are also first-class objects

  • Writing Python code is quick but running it is often slower than compiled languages. Fortunately, Python allows the inclusion of C-based extensions, so bottlenecks can be optimised away and often are. The numpy package is a good example of this; it’s really quite quick because a lot of the number crunching it does isn’t actually done by Python

  • Python finds use in many spheres - web applications, automation, scientific modelling, big data applications and many more. It’s also often used as “glue” code to get other languages and components to play nice.

  • Python makes difficult things easy so programmers can focus on algorithms and structures rather than nitty-gritty low-level details.
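The "functions are first-class objects" point is easy to demonstrate; a minimal sketch (names like make_adder are invented for illustration):

```python
def shout(text):
    # a plain function
    return text.upper()

# 1. assign a function to a variable
speak = shout
print(speak("hello"))  # HELLO

# 2. pass a function into another function
def twice(func, arg):
    return func(func(arg))

# 3. return a function from a function (a closure)
def make_adder(n):
    def add(x):
        return x + n
    return add

add5 = make_adder(5)
print(twice(add5, 0))  # 10
```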

Why this matters:

If you are applying for a Python position, you should know what it is and why it is so gosh-darn cool. And why it isn’t o.O

Question 2

Fill in the missing code:

```python
def print_directory_contents(sPath):
    """
    This function takes the name of a directory
    and prints out the paths of files within that
    directory as well as any files contained in
    contained directories.

    This function is similar to os.walk. Please don't
    use os.walk in your answer. We are interested in your
    ability to work with nested structures.
    """
    fill_this_in
```

Answer

```python
def print_directory_contents(sPath):
    import os
    for sChild in os.listdir(sPath):
        sChildPath = os.path.join(sPath, sChild)
        if os.path.isdir(sChildPath):
            print_directory_contents(sChildPath)
        else:
            print(sChildPath)
```

Pay special attention
  • be consistent with your naming conventions. If there is a naming convention evident in any sample code, stick to it. Even if it is not the naming convention you usually use
  • recursive functions need to recurse and terminate. Make sure you understand how this happens so that you avoid bottomless callstacks
  • we use the os module for interacting with the operating system in a way that is cross platform. You could say sChildPath = sPath + '/' + sChild but that wouldn’t work on windows
  • familiarity with base packages is really worthwhile, but don’t break your head trying to memorize everything, Google is your friend in the workplace!
  • ask questions if you don’t understand what the code is supposed to do
  • KISS! Keep it Simple, Stupid!
Why this matters:
  • displays knowledge of basic operating system interaction stuff
  • recursion is hella useful
Question 3

Looking at the below code, write down the final values of A0, A1, …An.

```python
A0 = dict(zip(('a','b','c','d','e'),(1,2,3,4,5)))
A1 = range(10)
A2 = sorted([i for i in A1 if i in A0])
A3 = sorted([A0[s] for s in A0])
A4 = [i for i in A1 if i in A3]
A5 = {i:i*i for i in A1}
A6 = [[i,i*i] for i in A1]
```

If you don't know what zip is, don't stress out. No sane employer will expect you to memorize the standard library. Here is the output of help(zip).

```
zip(...)
    zip(seq1 [, seq2 [...]]) -> [(seq1[0], seq2[0] ...), (...)]

    Return a list of tuples, where each tuple contains the i-th element
    from each of the argument sequences. The returned list is truncated
    in length to the length of the shortest argument sequence.
```

If that doesn’t make sense then take a few minutes to figure it out however you choose to.
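For instance, the A0 line above can be unpacked step by step; note how zip truncates to the shortest input:

```python
# pair up keys and values, exactly as in the A0 line above
pairs = list(zip(('a', 'b', 'c', 'd', 'e'), (1, 2, 3, 4, 5)))
print(pairs)        # [('a', 1), ('b', 2), ('c', 3), ('d', 4), ('e', 5)]
print(dict(pairs))  # the same mapping A0 is built from

# zip truncates to the length of the shortest argument sequence
print(list(zip('abcde', [1, 2])))  # [('a', 1), ('b', 2)]
```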

Answer

```python
A0 = {'a': 1, 'c': 3, 'b': 2, 'e': 5, 'd': 4}  # the order may vary
A1 = range(0, 10)  # or [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] in python 2
A2 = []
A3 = [1, 2, 3, 4, 5]
A4 = [1, 2, 3, 4, 5]
A5 = {0: 0, 1: 1, 2: 4, 3: 9, 4: 16, 5: 25, 6: 36, 7: 49, 8: 64, 9: 81}
A6 = [[0, 0], [1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36], [7, 49], [8, 64], [9, 81]]
```

Why this is important
  1. List comprehension is a wonderful time saver and a big stumbling block for a lot of people
  2. if you can read them you can probably write them down
  3. some of this code was made to be deliberately weird. You may need to work with some weird people

Get Your Python Code Reviewed

Question 4

Python and multi-threading. Is it a good idea? List some ways to get some Python code to run in a parallel way.

Answer

Python doesn’t allow multi-threading in the truest sense of the word. It has a multi-threading package but if you want to multi-thread to speed your code up, then it’s usually not a good idea to use it. Python has a construct called the Global Interpreter Lock (GIL). The GIL makes sure that only one of your ‘threads’ can execute at any one time. A thread acquires the GIL, does a little work, then passes the GIL onto the next thread. This happens very quickly so to the human eye it may seem like your threads are executing in parallel, but they are really just taking turns using the same CPU core. All this GIL passing adds overhead to execution. This means that if you want to make your code run faster then using the threading package often isn’t a good idea.

There are reasons to use Python’s threading package. If you want to run some things simultaneously, and efficiency is not a concern, then it’s totally fine and convenient. Or if you are running code that needs to wait for something (like some IO) then it could make a lot of sense. But the threading library won’t let you use extra CPU cores.

Multi-threading can be outsourced to the operating system (by doing multi-processing), to some external application that calls your Python code (e.g. Spark or Hadoop), or to some code that your Python code calls (e.g. you could have your Python code call a C function that does the expensive multi-threaded stuff).
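As a rough sketch of the multi-processing route (the task burn_cpu and the pool size are invented for illustration): each worker is a separate process with its own interpreter and its own GIL, so CPU-bound work can genuinely run on several cores at once.

```python
from multiprocessing import Pool

def burn_cpu(n):
    # a CPU-bound task: sum of squares below n
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    # four separate processes, each with its own GIL
    with Pool(processes=4) as pool:
        results = pool.map(burn_cpu, [10**6] * 4)
    print(len(results))  # 4
```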

###Why this is important
Because the GIL is an A-hole. Lots of people spend a lot of time trying to find bottlenecks in their fancy Python multi-threaded code before they learn what the GIL is.

Question 5

How do you keep track of different versions of your code?

Answer:

Version control! At this point, you should act excited and tell them how you even use Git (or whatever is your favorite) to keep track of correspondence with Granny. Git is my preferred version control system, but there are others, for example subversion.

Why this is important:

Because code without version control is like coffee without a cup. Sometimes we need to write once-off throw away scripts and that’s ok, but if you are dealing with any significant amount of code, a version control system will be a benefit. Version Control helps with keeping track of who made what change to the code base; finding out when bugs were introduced to the code; keeping track of versions and releases of your software; distributing the source code amongst team members; deployment and certain automations. It allows you to roll your code back to before you broke it which is great on its own. Lots of stuff. It’s just great.

Question 6

What does this code output:

```python
def f(x,l=[]):
    for i in range(x):
        l.append(i*i)
    print(l)

f(2)
f(3,[3,2,1])
f(3)
```

###Answer

```
[0, 1]
[3, 2, 1, 0, 1, 4]
[0, 1, 0, 1, 4]
```

###Huh?
The first function call should be fairly obvious, the loop appends 0 and then 1 to the empty list, l. l is a name for a variable that points to a list stored in memory.
The second call starts off by creating a new list in a new block of memory. l then refers to this new list. It then appends 0, 1 and 4 to this new list. So that’s great.
The third function call is the weird one. It uses the original list stored in the original memory block. That is why it starts off with 0 and 1.

Try this out if you don’t understand:
```python
l_mem = []

l = l_mem           # the first call
for i in range(2):
    l.append(i*i)

print(l)            # [0, 1]

l = [3,2,1]         # the second call
for i in range(3):
    l.append(i*i)

print(l)            # [3, 2, 1, 0, 1, 4]

l = l_mem           # the third call
for i in range(3):
    l.append(i*i)

print(l)            # [0, 1, 0, 1, 4]
```

Question 7

What is monkey patching and is it ever a good idea?

Answer

Monkey patching is changing the behaviour of a function or object after it has already been defined. For example:

```python
import datetime
datetime.datetime.now = lambda: datetime.datetime(2012, 12, 12)
```

Most of the time it’s a pretty terrible idea - it is usually best if things act in a well-defined way. One reason to monkey patch would be in testing. The mock package is very useful to this end.

###Why does this matter?
It shows that you understand a bit about methodologies in unit testing. Your mention of monkey avoidance will show that you aren’t one of those coders who favor fancy code over maintainable code (they are out there, and they suck to work with). Remember the principle of KISS? And it shows that you know a little bit about how Python works on a lower level, how functions are actually stored and called and suchlike.

PS: it’s really worth reading a little bit about mock if you haven’t yet. It’s pretty useful.
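As a minimal sketch, the stdlib's unittest.mock can apply a monkey patch that undoes itself (the function todays_tag is invented for illustration); unlike the bare lambda assignment above, the patch only lasts for the duration of the with-block:

```python
import datetime
from unittest import mock

def todays_tag():
    # code under test: depends on the current date
    return datetime.datetime.now().strftime('%Y-%m-%d')

fixed = datetime.datetime(2012, 12, 12)

# datetime.datetime is replaced only inside the with-block,
# so the rest of the program still sees the real clock
with mock.patch('datetime.datetime') as fake_dt:
    fake_dt.now.return_value = fixed
    print(todays_tag())  # 2012-12-12
```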

Question 8

What does this stuff mean: *args, **kwargs? And why would we use it?

Answer

Use *args when we aren’t sure how many arguments are going to be passed to a function, or if we want to pass a stored list or tuple of arguments to a function. **kwargs is used when we don’t know how many keyword arguments will be passed to a function, or it can be used to pass the values of a dictionary as keyword arguments. The identifiers args and kwargs are a convention; you could also use *bob and **billy, but that would not be wise.

Here is a little illustration:

```python
def f(*args,**kwargs):
    print(args, kwargs)

l = [1,2,3]
t = (4,5,6)
d = {'a':7,'b':8,'c':9}

f()
f(1,2,3)                       # (1, 2, 3) {}
f(1,2,3,"groovy")              # (1, 2, 3, 'groovy') {}
f(a=1,b=2,c=3)                 # () {'a': 1, 'c': 3, 'b': 2}
f(a=1,b=2,c=3,zzz="hi")        # () {'a': 1, 'c': 3, 'b': 2, 'zzz': 'hi'}
f(1,2,3,a=1,b=2,c=3)           # (1, 2, 3) {'a': 1, 'c': 3, 'b': 2}
f(*l,**d)                      # (1, 2, 3) {'a': 7, 'c': 9, 'b': 8}
f(*t,**d)                      # (4, 5, 6) {'a': 7, 'c': 9, 'b': 8}
f(1,2,*t)                      # (1, 2, 4, 5, 6) {}
f(q="winning",**d)             # () {'a': 7, 'q': 'winning', 'c': 9, 'b': 8}
f(1,2,*t,q="winning",**d)      # (1, 2, 4, 5, 6) {'a': 7, 'q': 'winning', 'c': 9, 'b': 8}

def f2(arg1,arg2,*args,**kwargs):
    print(arg1,arg2, args, kwargs)

f2(1,2,3)                      # 1 2 (3,) {}
f2(1,2,3,"groovy")             # 1 2 (3, 'groovy') {}
f2(arg1=1,arg2=2,c=3)          # 1 2 () {'c': 3}
f2(arg1=1,arg2=2,c=3,zzz="hi") # 1 2 () {'c': 3, 'zzz': 'hi'}
f2(1,2,3,a=1,b=2,c=3)          # 1 2 (3,) {'a': 1, 'c': 3, 'b': 2}
f2(*l,**d)                     # 1 2 (3,) {'a': 7, 'c': 9, 'b': 8}
f2(*t,**d)                     # 4 5 (6,) {'a': 7, 'c': 9, 'b': 8}
f2(1,2,*t)                     # 1 2 (4, 5, 6) {}
f2(1,1,q="winning",**d)        # 1 1 () {'a': 7, 'q': 'winning', 'c': 9, 'b': 8}
f2(1,2,*t,q="winning",**d)     # 1 2 (4, 5, 6) {'a': 7, 'q': 'winning', 'c': 9, 'b': 8}
```

Why do we care?

Sometimes we will need to pass an unknown number of arguments or keyword arguments into a function. Sometimes we will want to store arguments or keyword arguments for later use. Sometimes it’s just a time saver.

Question 9

What do these mean to you: @classmethod, @staticmethod, @property?

###Answer

Background knowledge

These are decorators. A decorator is a special kind of function that either takes a function and returns a function, or takes a class and returns a class. The @ symbol is just syntactic sugar that allows you to decorate something in a way that’s easy to read.

```python
@my_decorator
def my_func(stuff):
    do_things
```

Is equivalent to
```python
def my_func(stuff):
do_things

my_func = my_decorator(my_func)
```

You can find a tutorial on how decorators in general work here.

###Actual Answer
The decorators @classmethod, @staticmethod and @property are used on functions defined within classes. Here is how they behave:

```python
class MyClass(object):
    def __init__(self):
        self._some_property = "properties are nice"
        self._some_other_property = "VERY nice"
    def normal_method(*args,**kwargs):
        print("calling normal_method({0},{1})".format(args,kwargs))
    @classmethod
    def class_method(*args,**kwargs):
        print("calling class_method({0},{1})".format(args,kwargs))
    @staticmethod
    def static_method(*args,**kwargs):
        print("calling static_method({0},{1})".format(args,kwargs))
    @property
    def some_property(self,*args,**kwargs):
        print("calling some_property getter({0},{1},{2})".format(self,args,kwargs))
        return self._some_property
    @some_property.setter
    def some_property(self,*args,**kwargs):
        print("calling some_property setter({0},{1},{2})".format(self,args,kwargs))
        self._some_property = args[0]
    @property
    def some_other_property(self,*args,**kwargs):
        print("calling some_other_property getter({0},{1},{2})".format(self,args,kwargs))
        return self._some_other_property

o = MyClass()

# undecorated methods work like normal, they get the current instance (self) as the first argument
o.normal_method
# <bound method MyClass.normal_method of <__main__.MyClass instance at 0x7fdd2537ea28>>
o.normal_method()
# normal_method((<__main__.MyClass instance at 0x7fdd2537ea28>,),{})
o.normal_method(1,2,x=3,y=4)
# normal_method((<__main__.MyClass instance at 0x7fdd2537ea28>, 1, 2),{'y': 4, 'x': 3})

# class methods always get the class as the first argument
o.class_method
# <bound method classobj.class_method of <class __main__.MyClass at 0x7fdd2536a390>>
o.class_method()
# class_method((<class __main__.MyClass at 0x7fdd2536a390>,),{})
o.class_method(1,2,x=3,y=4)
# class_method((<class __main__.MyClass at 0x7fdd2536a390>, 1, 2),{'y': 4, 'x': 3})

# static methods have no arguments except the ones you pass in when you call them
o.static_method
# <function static_method at 0x7fdd25375848>
o.static_method()
# static_method((),{})
o.static_method(1,2,x=3,y=4)
# static_method((1, 2),{'y': 4, 'x': 3})

# properties are a way of implementing getters and setters. It's an error to explicitly call them
# "read only" attributes can be specified by creating a getter without a setter (as in some_other_property)
o.some_property
# calling some_property getter(<__main__.MyClass instance at 0x7fb2b70877e8>,(),{})
# 'properties are nice'
o.some_property()
# calling some_property getter(<__main__.MyClass instance at 0x7fb2b70877e8>,(),{})
# Traceback (most recent call last):
#   File "<stdin>", line 1, in <module>
# TypeError: 'str' object is not callable
o.some_other_property
# calling some_other_property getter(<__main__.MyClass instance at 0x7fb2b70877e8>,(),{})
# 'VERY nice'
# o.some_other_property()
# calling some_other_property getter(<__main__.MyClass instance at 0x7fb2b70877e8>,(),{})
# Traceback (most recent call last):
#   File "<stdin>", line 1, in <module>
# TypeError: 'str' object is not callable
o.some_property = "groovy"
# calling some_property setter(<__main__.MyClass object at 0x7fb2b7077890>,('groovy',),{})
o.some_property
# calling some_property getter(<__main__.MyClass object at 0x7fb2b7077890>,(),{})
# 'groovy'
o.some_other_property = "very groovy"
# Traceback (most recent call last):
#   File "<stdin>", line 1, in <module>
# AttributeError: can't set attribute
o.some_other_property
# calling some_other_property getter(<__main__.MyClass object at 0x7fb2b7077890>,(),{})
# 'VERY nice'
```

Question 10

Consider the following code, what will it output?

```python
class A(object):
    def go(self):
        print("go A go!")
    def stop(self):
        print("stop A stop!")
    def pause(self):
        raise Exception("Not Implemented")

class B(A):
    def go(self):
        super(B, self).go()
        print("go B go!")

class C(A):
    def go(self):
        super(C, self).go()
        print("go C go!")
    def stop(self):
        super(C, self).stop()
        print("stop C stop!")

class D(B,C):
    def go(self):
        super(D, self).go()
        print("go D go!")
    def stop(self):
        super(D, self).stop()
        print("stop D stop!")
    def pause(self):
        print("wait D wait!")

class E(B,C):
    pass

a = A()
b = B()
c = C()
d = D()
e = E()

# specify output from here onwards
a.go()
b.go()
c.go()
d.go()
e.go()

a.stop()
b.stop()
c.stop()
d.stop()
e.stop()

a.pause()
b.pause()
c.pause()
d.pause()
e.pause()
```

Answer

The output is specified in the comments in the segment below:

```python
a.go()
# go A go!
b.go()
# go A go!
# go B go!
c.go()
# go A go!
# go C go!
d.go()
# go A go!
# go C go!
# go B go!
# go D go!
e.go()
# go A go!
# go C go!
# go B go!

a.stop()
# stop A stop!
b.stop()
# stop A stop!
c.stop()
# stop A stop!
# stop C stop!
d.stop()
# stop A stop!
# stop C stop!
# stop D stop!
e.stop()
# stop A stop!
# stop C stop!

a.pause()
# ... Exception: Not Implemented
b.pause()
# ... Exception: Not Implemented
c.pause()
# ... Exception: Not Implemented
d.pause()
# wait D wait!
e.pause()
# ... Exception: Not Implemented
```

Note that e.stop() resolves to C.stop (E defines no stop of its own and B has none either), so it prints both the A and C lines.

Why do we care?

Because OO programming is really, really important. Really. Answering this question shows your understanding of inheritance and the use of Python’s super function. Most of the time the order of resolution doesn’t matter. Sometimes it does; it depends on your application.
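The order Python actually searches can be inspected directly via the class's __mro__ attribute; a minimal self-contained sketch mirroring the diamond above:

```python
class A(object): pass
class B(A): pass
class C(A): pass
class D(B, C): pass

# C3 linearisation: D is searched before B, B before C, C before A
print([klass.__name__ for klass in D.__mro__])
# ['D', 'B', 'C', 'A', 'object']
```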

Question 11

Consider the following code, what will it output?

```python
class Node(object):
    def __init__(self,sName):
        self._lChildren = []
        self.sName = sName
    def __repr__(self):
        return "<Node '{}'>".format(self.sName)
    def append(self,*args,**kwargs):
        self._lChildren.append(*args,**kwargs)
    def print_all_1(self):
        print(self)
        for oChild in self._lChildren:
            oChild.print_all_1()
    def print_all_2(self):
        def gen(o):
            lAll = [o,]
            while lAll:
                oNext = lAll.pop(0)
                lAll.extend(oNext._lChildren)
                yield oNext
        for oNode in gen(self):
            print(oNode)

oRoot = Node("root")
oChild1 = Node("child1")
oChild2 = Node("child2")
oChild3 = Node("child3")
oChild4 = Node("child4")
oChild5 = Node("child5")
oChild6 = Node("child6")
oChild7 = Node("child7")
oChild8 = Node("child8")
oChild9 = Node("child9")
oChild10 = Node("child10")
oRoot.append(oChild1)
oRoot.append(oChild2)
oRoot.append(oChild3)
oChild1.append(oChild4)
oChild1.append(oChild5)
oChild2.append(oChild6)
oChild4.append(oChild7)
oChild3.append(oChild8)
oChild3.append(oChild9)
oChild6.append(oChild10)

# specify output from here onwards
oRoot.print_all_1()
oRoot.print_all_2()
```

Answer

oRoot.print_all_1() prints:

```
<Node 'root'>
<Node 'child1'>
<Node 'child4'>
<Node 'child7'>
<Node 'child5'>
<Node 'child2'>
<Node 'child6'>
<Node 'child10'>
<Node 'child3'>
<Node 'child8'>
<Node 'child9'>
```

oRoot.print_all_2() prints:

<Node 'root'>
<Node 'child1'>
<Node 'child2'>
<Node 'child3'>
<Node 'child4'>
<Node 'child5'>
<Node 'child6'>
<Node 'child8'>
<Node 'child9'>
<Node 'child7'>
<Node 'child10'>

Why do we care?

Because composition and object construction are what objects are all about. Objects are composed of stuff and they need to be initialised somehow. This also ties together some points about recursion and the use of generators.

Generators are great. You could have achieved similar functionality to print_all_2 by just constructing a big long list and then printing its contents. One of the nice things about generators is that they don't need to take up much space in memory.

It is also worth pointing out that print_all_1 traverses the tree in a depth-first manner, while print_all_2 is breadth-first. Make sure you understand those terms. Sometimes one kind of traversal is more appropriate than the other, but that depends very much on your application.

Question 12

Describe Python’s garbage collection mechanism in brief.

Answer

A lot can be said here. There are a few main points that you should mention:

  • Python maintains a count of the number of references to each object in memory. If a reference count goes to zero then the associated object is no longer live and the memory allocated to that object can be freed up for something else
  • occasionally things called “reference cycles” happen. The garbage collector periodically looks for these and cleans them up. An example would be if you have two objects o1 and o2 such that o1.x == o2 and o2.x == o1. If o1 and o2 are not referenced by anything else then they shouldn’t be live. But each of them has a reference count of 1.
  • Certain heuristics are used to speed up garbage collection. For example, recently created objects are more likely to be dead. As objects are created, the garbage collector assigns them to generations. Each object gets one generation, and younger generations are dealt with first.

This explanation is CPython specific.

Question 13

Place the following functions in order of their efficiency. They all take in a list of numbers between 0 and 1. The list can be quite long. An example input list would be [random.random() for i in range(100000)]. How would you prove that your answer is correct?

def f1(lIn):
    l1 = sorted(lIn)
    l2 = [i for i in l1 if i < 0.5]
    return [i*i for i in l2]

def f2(lIn):
    l1 = [i for i in lIn if i < 0.5]
    l2 = sorted(l1)
    return [i*i for i in l2]

def f3(lIn):
    l1 = [i*i for i in lIn]
    l2 = sorted(l1)
    return [i for i in l1 if i < (0.5*0.5)]

Answer

Most to least efficient: f2, f1, f3. To prove that this is the case, you would want to profile your code. Python has a lovely profiling module, cProfile, that should do the trick.

import cProfile
import random

lIn = [random.random() for i in range(100000)]
cProfile.run('f1(lIn)')
cProfile.run('f2(lIn)')
cProfile.run('f3(lIn)')

For completeness, here is what the above profiling outputs:

>>> cProfile.run('f1(lIn)')
         4 function calls in 0.045 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.009    0.009    0.044    0.044 <stdin>:1(f1)
        1    0.001    0.001    0.045    0.045 <string>:1(<module>)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        1    0.035    0.035    0.035    0.035 {sorted}

>>> cProfile.run('f2(lIn)')
         4 function calls in 0.024 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.008    0.008    0.023    0.023 <stdin>:1(f2)
        1    0.001    0.001    0.024    0.024 <string>:1(<module>)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        1    0.016    0.016    0.016    0.016 {sorted}

>>> cProfile.run('f3(lIn)')
         4 function calls in 0.055 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.016    0.016    0.054    0.054 <stdin>:1(f3)
        1    0.001    0.001    0.055    0.055 <string>:1(<module>)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        1    0.038    0.038    0.038    0.038 {sorted}

Why do we care?

Locating and avoiding bottlenecks is often pretty worthwhile. A lot of coding for efficiency comes down to common sense - in the example above it's obviously quicker to sort a smaller list, so if you have the chance to filter before a sort it's often a good idea. The less obvious stuff can still be located through use of the proper tools. It's good to know about these tools.

Question 14

Something you failed at?

Wrong answer

I never fail!

Why this is important:

Shows that you are capable of admitting errors, taking responsibility for your mistakes, and learning from your mistakes. All of these things are pretty darn important if you are going to be useful. If you are actually perfect then too bad, you might need to get creative here.

Question 15

Do you have any personal projects?

Really?

This shows that you are willing to do more than the bare minimum in terms of keeping your skillset up to date. If you work on personal projects and code outside of the workplace then employers are more likely to see you as an asset that will grow. Even if they don’t ask this question I find it’s useful to broach the subject.

Conclusion

These questions intentionally touched on many topics. And the answers were intentionally verbose. In a programming interview, you will need to demonstrate your understanding and if you can do it in a concise way then by all means do that. I tried to give enough information in the answers that you could glean some meaning from them even if you had never heard of some of these topics before. I hope you find this useful in your job hunt.

Go get ‘em tiger.

Categories: FLOSS Project Planets

SystemSeed: Can services adopt the future of devops?

Planet Drupal - Thu, 2016-12-01 04:36

Startups and products can move faster than agencies that serve clients, as there are no feedback loops or manual QA steps by an external authority that can halt a build going live.

One of the roundtable discussions that popped up this week while we're all in Minsk is that agencies which practice Agile transparently, as SystemSeed does, see a common trade-off: CI/CD (Continuous Integration / Continuous Deployment) isn't quite possible as long as you have manual QA and that lead time baked in.

Non-Agile (or "Waterfall") agencies can potentially deliver work faster, but without any insight by the client, inevitably then needing change requests - which I've always visualised as the false economy of Waterfall, as demonstrated here:

Would the client prefer Waterfall plus change requests, being kept in the dark throughout development but with all work potentially delivered faster (and never in its final state)? Or would they prefer full transparency, with having to check all story details, QA and sign off, as well as multi-stakeholder oversight? In short - it can get complicated.

CI and CD isn’t truly possible when a manual review step is mandatory. Today we maintain a thorough manual QA by ourselves and our clients before deploy using a “standard” (feature branch -> dev -> stage -> production) devops process, where manual QA and automated test suites occur both at the feature branch level and just before deployment (Stage). Pantheon provides this hosting infrastructure and makes this simple as visualised below:

This week we brainstormed Blue & Green live environments, which may allow for full Continuous Deployment, whereby deploys are automated whenever scripted tests pass, specifically without manual client sign-off. What this does is add a fully live clone of the production environment to the chain: new changes are always deployed out to the clone of live, and at any time the system can be switched from pointing at the "Green" production environment to the "Blue" clone, or back again.

Assuming typical rollbacks are simple and the databases are either in sync or both the Green and Blue codebases point at a single DB, this theory is well supported and could well be the future of devops - especially when deploys are best made "immediately" rather than the next morning or at times of low traffic.

In this case clients would be approving work already deployed to a production-ready environment which will be switched to as soon as their manual QA step is completed.

One argument made was that our standard Pantheon model already allows for this in Stage; we just need an automated process to push from Stage to Live once QA has passed. We'll write more on this if our own processes move in this direction.


Brian Okken: 26: pyresttest – Sam Van Oort

Planet Python - Thu, 2016-12-01 03:05

Interview with Sam Van Oort about pyresttest, “A REST testing and API microbenchmarking tool” pyresttest A question in the Test & Code Slack channel was raised about testing REST APIs. There were answers such as pytest + requests, of course, but there was also a mention of pyresttest, https://github.com/svanoort/pyresttest, which I hadn’t heard of. I […]

The post 26: pyresttest – Sam Van Oort appeared first on Python Testing.


Brian Okken: 25: Selenium, pytest, Mozilla – Dave Hunt

Planet Python - Thu, 2016-12-01 02:45

Interview with Dave Hunt @davehunt82. We Cover: Selenium Driver: http://www.seleniumhq.org/ pytest: http://docs.pytest.org/ pytest plugins: pytest-selenium: http://pytest-selenium.readthedocs.io/ pytest-html: https://pypi.python.org/pypi/pytest-html pytest-variables: https://pypi.python.org/pypi/pytest-variables tox: https://tox.readthedocs.io Dave Hunt’s “help wanted” list on github: Dave Hunt’s help wanted list Mozilla: https://www.mozilla.org Also: fixtures xfail CI and xfail and html reports CI and capturing pytest code sprint working remotely for Mozilla […]

The post 25: Selenium, pytest, Mozilla – Dave Hunt appeared first on Python Testing.


KDevelop: Seeking maintainer for Ruby language support

Planet KDE - Wed, 2016-11-30 18:39

Heya,

just a short heads-up that KDevelop is seeking a new maintainer for its Ruby language support. Miquel Sabaté did an amazing job maintaining the plugin in recent years, but would like to step down as maintainer because he lacks the time to continue looking after it.

Here's an excerpt from a mail Miquel kindly provided, to make it easier for newcomers to follow-up on his work in kdev-ruby:

As you might know, the development of kdev-ruby has stalled and the KDevelop team is looking for developers who want to work on it. The plugin is still considered experimental because there is still plenty of work to be done. What has been done so far:

  • The parser is based on the one that can be found on MRI. That being said, it's based on an old version of it so you might want to update it.
  • The DUChain code is mostly done but it's not stable yet, so there's quite some work to be done on this front too.
  • Code completion mostly works but it's quite basic.
  • Ruby on Rails navigation is done and works.

There is a lot of work to be done, and I'm honestly skeptical whether this approach will end up working anyway. Because of this skepticism and the fact that I was using another editor, I ended up abandoning the project, and thus kdev-ruby was no longer maintained by anyone.

If you feel that you can take the challenge and you want to contribute to kdev-ruby, please reach out to the KDevelop team. They are extremely friendly and will guide you on the process of developing this plugin.

Again, thanks for all your work Miquel, you will be missed!

If you're interested in that kind of KDevelop plugin development, please get in touch with us!

More information about kdev-ruby here: https://community.kde.org/KDevelop/Ruby


Finding a valid build order for KDE repositories

Planet KDE - Wed, 2016-11-30 18:13

KDE has lately been growing quite a bit in repositories, and it's not always easy to tell what needs to be built first: do I build kdepim-apps-libs before pimcommon, or the other way around?

A few days ago I was puzzled by the same question and realized we have the answer in the dependency-data-* files from the kde-build-metadata repository.

They define what depends on what, so all we need to do is build a graph from those dependencies and get a valid build order from it.

Thankfully Python already has modules for graphs and such, so build-order.py was not that hard to write.

So say you want to know a valid build order for the stable repositories based on kf5-qt5

Here it is

Note I've been saying *a* valid build order, not *the* valid build order: there are various orders that are valid, since not every repo depends on every other repo.

Now I wonder: does anyone else find this useful? And if so, to which repository do you think I should commit such a script?
