FLOSS Project Planets

KDE Edu Sprint 2017

Planet KDE - Mon, 2017-12-11 10:22

Two months ago I attended KDE Edu Sprint 2017 in Berlin. It was my first KDE sprint (really, I have been sending code to KDE software since 2010 and had never been to a sprint!), so I was really excited about the event.

KDE Edu is the umbrella for KDE's educational software. There is a lot of it, and it is the main educational software suite in the free software world. Despite this, KDE Edu has received little attention on the organizational side: for instance, the previous KDE Edu sprint took place several years ago, our website has some problems, and more.

Therefore, this sprint was an opportunity not only for developers to work on software development, but to work on the organizational side as well.

On the organizational side, we discussed the rebranding of some software more related to university work than to “education” itself, like Cantor and LabPlot. There was a wish to create something like KDE Research/Science in order to put software like them, and others like Kile and KBibTeX, under the same umbrella. There is an ongoing discussion about this theme.

Another topic was the discussion of a new website, oriented more towards teaching how to use KDE software in an educational context than presenting a set of software. In fact, I think we need to do this and strengthen the “KDE Edu brand” in order to have a specific icon+link on the KDE products page.

Next, the developers at the sprint agreed on a multi-operating-system policy for KDE Edu. KDE software can be built and distributed to users of several operating systems, not only Linux. During the sprint some developers worked on bringing installers to Windows and macOS, porting applications to Android, and creating distribution-independent installers for Linux using Flatpak.

Besides the discussions on this point, I worked on a rule to send an e-mail to the KDE Edu mailing list for each new Differential Revision of KDE Edu software in Phabricator. Sorry devs, our mailboxes are full of e-mails because of me.

Now on the development side, my focus was to work hard on Cantor. First, I did some task triage on our workboard: closing, opening, and adding more information to some tasks. Second, I reviewed some of the work done by Rishabh Gupta, my student during GSoC 2017. He ported the Lua and R backends to QProcess, and they will be available soon.

After that I worked on porting the Python 3 backend to the Python/C API. This work is in progress and I expect to finish it in time for the 18.04 release.

Of course, besides this amount of work we had fun with some beers and German food (and some American, Chinese, Arab, and Italian food as well)! I was also happy because my 31st birthday fell on the first day of the sprint, so thank you KDE for coming to my birthday party full of code, good beers, and pork dishes.

To finish, it is always a pleasure to meet gearheads like my Spanish friends Albert and Aleix, Timothée (the only other Mageia user I have ever met in person), my GSoC student Rishabh, my Brazilian brother Sandro, and new friends Sanjiban and David.

Thank you KDE e.V. for providing the resources for the sprint, and thank you Endocode for hosting it.

Categories: FLOSS Project Planets

Roy Scholten: Thanks to Dropsolid, my first Drupal contribution sponsor

Planet Drupal - Mon, 2017-12-11 10:19

Creating time and space to do the work

DropSolid in Belgium are the first organisation to sponsor some of my time to work on Drupal core.

It is very liberating to set aside dedicated time for tackling bigger chunks of Drupal core work instead of sneaking in bits and pieces at the edges of the day.

In the time available I was able to:

Many thanks to Nick Veenhof for reaching out and to DropSolid for supporting my work to help design a better Drupal!

I’m open to more organisations sponsoring me to work on the UX and product side of Drupal core. If that is something you are interested in, let me know.

Tags drupalplanet
Categories: FLOSS Project Planets

Ixis.co.uk - Thoughts: Last Month in Drupal - November 2017

Planet Drupal - Mon, 2017-12-11 10:00
November saw the Drupal Association bring us the Q2 2017 financial state summary. We were also provided with an update on what is new on Drupal.org, including news that the community page has been given its own place to live: it now has a proper section with its own blog.
Categories: FLOSS Project Planets

Real Python: Building a Simple Web App with Bottle, SQLAlchemy, and the Twitter API

Planet Python - Mon, 2017-12-11 09:47

This is a guest blog post by Bob Belderbos. Bob is a driven Pythonista working as a software developer at Oracle. He is also co-founder of PyBites, a Python blog featuring code challenges, articles, and news. Bob is passionate about automation, data, web development, code quality, and mentoring other developers.

Last October we challenged our PyBites audience to make a web app to better navigate the Daily Python Tip feed. In this article, I’ll share what I built and what I learned along the way.

In this article you will learn:

  1. How to clone the project repo and set up the app.
  2. How to use the Twitter API via the Tweepy module to load in the tweets.
  3. How to use SQLAlchemy to store and manage the data (tips and hashtags).
  4. How to build a simple web app with Bottle, a micro web-framework similar to Flask.
  5. How to use the pytest framework to add tests.
  6. How Better Code Hub’s guidance led to more maintainable code.

If you want to follow along, reading the code in detail (and possibly contribute), I suggest you fork the repo. Let’s get started.

Project Setup

First, Namespaces are one honking great idea, so let’s do our work in a virtual environment. Using Anaconda, I create it like so:

$ virtualenv -p <path-to-python-to-use> ~/virtualenvs/pytip

Create a production and a test database in Postgres:

$ psql
psql (9.6.5, server 9.6.2)
Type "help" for help.

# create database pytip;
CREATE DATABASE
# create database pytip_test;
CREATE DATABASE

We’ll need credentials to connect to the database and the Twitter API (create a new app first). As per best practice, configuration should be stored in the environment, not in the code. Put the following env variables at the end of ~/virtualenvs/pytip/bin/activate, the script that handles activation/deactivation of your virtual environment, making sure to update the values for your environment:

export DATABASE_URL='postgres://postgres:password@localhost:5432/pytip'
# twitter
export CONSUMER_KEY='xyz'
export CONSUMER_SECRET='xyz'
export ACCESS_TOKEN='xyz'
export ACCESS_SECRET='xyz'
# if deploying it set this to 'heroku'
export APP_LOCATION=local

In the deactivate function of the same script, I unset them so we keep things out of the shell scope when deactivating (leaving) the virtual environment:

unset DATABASE_URL
unset CONSUMER_KEY
unset CONSUMER_SECRET
unset ACCESS_TOKEN
unset ACCESS_SECRET
unset APP_LOCATION

Now is a good time to activate the virtual environment:

$ source ~/virtualenvs/pytip/bin/activate

Clone the repo and, with the virtual environment enabled, install the requirements:

$ git clone https://github.com/pybites/pytip && cd pytip
$ pip install -r requirements.txt

Next, we import the collection of tweets with:

$ python tasks/import_tweets.py

Then, verify that the tables were created and the tweets were added:

$ psql
\c pytip
pytip=# \dt
         List of relations
 Schema |   Name   | Type  |  Owner
--------+----------+-------+----------
 public | hashtags | table | postgres
 public | tips     | table | postgres
(2 rows)

pytip=# select count(*) from tips;
 count
-------
   222
(1 row)

pytip=# select count(*) from hashtags;
 count
-------
    27
(1 row)

pytip=# \q

Now let’s run the tests:

$ pytest
========================== test session starts ==========================
platform darwin -- Python 3.6.2, pytest-3.2.3, py-1.4.34, pluggy-0.4.0
rootdir: realpython/pytip, inifile:
collected 5 items

tests/test_tasks.py .
tests/test_tips.py ....

========================== 5 passed in 0.61 seconds ==========================

And lastly run the Bottle app with:

$ python app.py

Browse to http://localhost:8080 and voilà: you should see the tips sorted by popularity in descending order. Clicking a hashtag link on the left, or using the search box, lets you easily filter them. Here we see the pandas tips, for example:

I made the design with MUI – a lightweight CSS framework that follows Google’s Material Design guidelines.

Implementation Details

The DB and SQLAlchemy

I used SQLAlchemy to interface with the DB to prevent having to write a lot of (redundant) SQL.

In tips/models.py, we define our models – Hashtag and Tip – that SQLAlchemy will map to DB tables:

from sqlalchemy import Column, Sequence, Integer, String, DateTime
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Hashtag(Base):
    __tablename__ = 'hashtags'
    id = Column(Integer, Sequence('id_seq'), primary_key=True)
    name = Column(String(20))
    count = Column(Integer)

    def __repr__(self):
        return "<Hashtag('%s', '%d')>" % (self.name, self.count)


class Tip(Base):
    __tablename__ = 'tips'
    id = Column(Integer, Sequence('id_seq'), primary_key=True)
    tweetid = Column(String(22))
    text = Column(String(300))
    created = Column(DateTime)
    likes = Column(Integer)
    retweets = Column(Integer)

    def __repr__(self):
        return "<Tip('%d', '%s')>" % (self.id, self.text)

In tips/db.py, we import these models, and now it’s easy to work with the DB, for example to interface with the Hashtag model:

def get_hashtags():
    return session.query(Hashtag).order_by(Hashtag.name.asc()).all()

And:

def add_hashtags(hashtags_cnt):
    for tag, count in hashtags_cnt.items():
        session.add(Hashtag(name=tag, count=count))
    session.commit()

Query the Twitter API

We need to retrieve the data from Twitter. For that, I created tasks/import_tweets.py. I packaged this under tasks because it should be run as a daily cronjob to look for new tips and update stats (number of likes and retweets) on existing tweets. For the sake of simplicity I have the tables recreated daily. If we start to rely on FK relations with other tables, we should definitely choose update statements over delete+add.

We used this script in the Project Setup. Let’s see what it does in more detail.

First, we create an API session object, which we pass to tweepy.Cursor. This feature of the API is really nice: it deals with pagination, iterating through the timeline. For the number of tips – 222 at the time of writing – it’s really fast. The exclude_replies=True and include_rts=False arguments are convenient because we only want Daily Python Tip’s own tweets (not retweets).
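To make that concrete, here is a minimal sketch of what the cursor-based retrieval could look like, using the env vars from Project Setup. This is an illustration of mine (assuming standard Tweepy calls), not the post’s actual code:

import os

import tweepy

# Authenticate, then let Cursor handle timeline pagination for us.
auth = tweepy.OAuthHandler(os.environ['CONSUMER_KEY'],
                           os.environ['CONSUMER_SECRET'])
auth.set_access_token(os.environ['ACCESS_TOKEN'],
                      os.environ['ACCESS_SECRET'])
api = tweepy.API(auth)

tips = [tweet for tweet in tweepy.Cursor(api.user_timeline,
                                         screen_name='python_tip',
                                         exclude_replies=True,
                                         include_rts=False).items()]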

Extracting hashtags from the tips requires very little code.

First, I defined a regex for a tag:

TAG = re.compile(r'#([a-z0-9]{3,})')

Then, I used findall to get all tags.

I passed them to collections.Counter, which returns a dict-like object with the tags as keys and counts as values, ordered in descending order by value (most common first). I excluded the overly common python tag, which would skew the results.

def get_hashtag_counter(tips):
    blob = ' '.join(t.text.lower() for t in tips)

    cnt = Counter(TAG.findall(blob))

    if EXCLUDE_PYTHON_HASHTAG:
        cnt.pop('python', None)

    return cnt

Finally, the import_* functions in tasks/import_tweets.py do the actual import of the tweets and hashtags, calling the add_* DB methods of the tips directory/package.

Make a Simple web app with Bottle

With this pre-work done, making a web app is surprisingly easy (or not so surprising if you used Flask before).

First of all meet Bottle:

Bottle is a fast, simple and lightweight WSGI micro web-framework for Python. It is distributed as a single file module and has no dependencies other than the Python Standard Library.

Nice. The resulting web app comprises fewer than 30 LOC and can be found in app.py.

For this simple app, a single method with an optional tag argument is all it takes. Similar to Flask, the routing is handled with decorators. If called with a tag, it filters the tips on that tag; otherwise it shows them all. The view decorator defines the template to use. Like Flask (and Django), we return a dict for use in the template.

@route('/')
@route('/<tag>')
@view('index')
def index(tag=None):
    tag = tag or request.query.get('tag') or None
    tags = get_hashtags()
    tips = get_tips(tag)

    return {'search_tag': tag or '',
            'tags': tags,
            'tips': tips}

As per the documentation, to work with static files you add this snippet at the top, after the imports:

@route('/static/<filename:path>')
def send_static(filename):
    return static_file(filename, root='static')

Finally, we want to make sure we only run in debug mode on localhost, hence the APP_LOCATION env variable we defined in Project Setup:

if os.environ.get('APP_LOCATION') == 'heroku':
    run(host="0.0.0.0", port=int(os.environ.get("PORT", 5000)))
else:
    run(host='localhost', port=8080, debug=True, reloader=True)

Bottle Templates

Bottle comes with a fast, powerful and easy to learn built-in template engine called SimpleTemplate.

In the views subdirectory I defined a header.tpl, index.tpl, and footer.tpl. For the tag cloud, I used some simple inline CSS increasing tag size by count, see header.tpl:

% for tag in tags:
  <a style="font-size: {{ tag.count/10 + 1 }}em;" href="/{{ tag.name }}">#{{ tag.name }}</a>&nbsp;&nbsp;
% end

In index.tpl we loop over the tips:

% for tip in tips:
  <div class='tip'>
    <pre>{{ !tip.text }}</pre>
    <div class="mui--text-dark-secondary"><strong>{{ tip.likes }}</strong> Likes / <strong>{{ tip.retweets }}</strong> RTs / {{ tip.created }} / <a href="https://twitter.com/python_tip/status/{{ tip.tweetid }}" target="_blank">Share</a></div>
  </div>
% end

If you are familiar with Flask and Jinja2, this should look very familiar. Embedding Python is even easier, with less typing (% ... vs {% ... %}).

All css, images (and JS if we’d use it) go into the static subfolder.

And that’s all there is to making a basic web app with Bottle. Once you have the data layer properly defined it’s pretty straightforward.

Add tests with pytest

Now let’s make this project a bit more robust by adding some tests. Testing the DB required a bit more digging into the pytest framework, but I ended up using the pytest.fixture decorator to set up and tear down a database with some test tweets.

Instead of calling the Twitter API, I used some static data provided in tweets.json. And rather than using the live DB, in tips/db.py I check whether pytest is the caller (sys.argv[0]). If so, I use the test DB. I will probably refactor this, because Bottle supports working with config files.
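To give a flavor of the approach, here is a minimal, hypothetical fixture in that spirit. It is not the project’s actual test code: it assumes an in-memory SQLite DB instead of the Postgres test DB the post uses, and the sample tip is made up.

import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from tips.models import Base, Tip


@pytest.fixture
def session():
    # Build a throwaway schema, hand the tests a session, then tear down.
    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    session.add(Tip(tweetid='1', text='some #python tip',
                    likes=1, retweets=0))
    session.commit()
    yield session
    session.close()
    Base.metadata.drop_all(engine)


def test_tip_count(session):
    assert session.query(Tip).count() == 1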

The hashtag part was easier to test (test_get_hashtag_counter) because I could just add some hashtags to a multiline string. No fixtures needed.

Code quality matters – Better Code Hub

Better Code Hub guides you in writing, well, better code. Before writing the tests the project scored a 7:

Not bad, but we can do better:

  1. I bumped it to a 9 by making the code more modular, taking the DB logic out of app.py (the web app) and putting it in the tips folder/package (refactorings 1 and 2)

  2. Then with the tests in place the project scored a 10:

Conclusion and Learning

Our Code Challenge #40 offered some good practice:

  1. I built a useful app which can be expanded (I want to add an API).
  2. I used some cool modules worth exploring: Tweepy, SQLAlchemy, and Bottle.
  3. I learned some more pytest because I needed fixtures to test interaction with the DB.
  4. Above all, having to make the code testable made the app more modular, which made it easier to maintain. Better Code Hub was of great help in this process.
  5. I deployed the app to Heroku using our step-by-step guide.
We Challenge You

The best way to learn and improve your coding skills is to practice. At PyBites we solidified this concept by organizing Python code challenges. Check out our growing collection, fork the repo, and get coding!

Let us know if you build something cool by making a Pull Request of your work. We have seen folks really stretching themselves through these challenges, and so have we.

Happy coding!

Contact Info

I am Bob Belderbos from PyBites, you can reach out to me by:

Categories: FLOSS Project Planets

Possbility and Probability: Example of great documentation in Python

Planet Python - Mon, 2017-12-11 09:04

Documentation is one of those tasks in programming that does not get as much attention as it probably should. Great documentation is even more rare. We live in an age of unlimited free information in the form of blog posts … Continue reading →

The post Example of great documentation in Python appeared first on Possibility and Probability.

Categories: FLOSS Project Planets

Doug Hellmann: subprocess — Spawning Additional Processes — PyMOTW 3

Planet Python - Mon, 2017-12-11 09:00
The subprocess module supports three APIs for working with processes. The run() function, added in Python 3.5, is a high-level API for running a process and optionally collecting its output. The functions call() , check_call() , and check_output() are the former high-level API, carried over from Python 2. They are still supported and widely used …
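As a quick illustration of that high-level API (my example, not from the PyMOTW article):

import subprocess

# Run a command and capture its output; run() returns a CompletedProcess.
completed = subprocess.run(['echo', 'hello'], stdout=subprocess.PIPE)
print(completed.returncode)  # 0
print(completed.stdout)      # b'hello\n'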
Categories: FLOSS Project Planets

PyCharm: Developing in a VM with Vagrant and Ansible

Planet Python - Mon, 2017-12-11 08:56

One of the things that can make developing cloud applications hard is differences between the dev environment and the production environment. This is why one of the factors of the twelve-factor app is maintaining dev-prod parity. Today we’ll start a blog series about developing cloud applications, and we’ll discuss how to set up a local development environment using Vagrant.

We’ll use these technologies for this application:

  • Vagrant
  • Ansible
  • Flask
  • Virtualenv
  • Ubuntu

Today we’ll just create a simple Flask application that’ll say ‘Hello world’. In the next post in this series, we’ll introduce a larger application that we’ll deploy to AWS in a future post.

If you want to follow along at home, you can find the code from today’s blog post on GitHub. See the commit history there to see the progress from the beginning to the end.

Getting Started

So let’s create a project, and get started. If you want to follow along, you’ll need to have Vagrant, Virtualbox, and PyCharm Professional Edition installed on your computer.

Open PyCharm, and create a new pure Python project.

The first step will be to set up the Vagrant VM, and configure the necessary items. In the project folder, run vagrant init -m bento/ubuntu-16.04. You can run commands within PyCharm by opening the terminal (Alt + F12).

This generates a Vagrantfile that only contains the base box that we’re using. If we run vagrant up at this point, we’d get a plain Ubuntu server box. For our project we’ll need to install some things and expose some ports though, so let’s add this to the Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-16.04"

  config.vm.network "forwarded_port", guest: 5000, host: 5000

  config.vm.provision "ansible_local" do |a|
    a.playbook = "setup.yml"
  end
end

The ansible_local provisioner will install Ansible on the Ubuntu VM and then run it there, which means we don’t need to install Ansible on our host computer. Ansible lets us describe the desired state of a computer and will then make the necessary changes to achieve that state. So let’s have a look at what’s necessary to install Python 3.6 on the VM.

Provisioning a VM with Ansible

Ansible works with Playbooks. These are YAML files that describe what state should be applied to what machines. Let’s create setup.yml, and try to install Python 3.6:

---
- hosts: all
  become: yes  # This means that all tasks will be executed with sudo
  tasks:
    - name: Install Python 3.6
      apt:
        name: python3.6
        state: present
        update_cache: yes

A playbook is a list of plays at the top level. For each play we can configure which hosts we want to apply it to, whether we need to become another user, and a list of tasks. In our example, we apply the play to all hosts: there’s only one host in the Vagrant setup, so that’s easy enough. We also set become to yes, which has the effect of running our tasks with sudo.

The tasks are the way we can configure the desired state of our VM. We can name our tasks to make it easier for us to see what’s going on, but Ansible doesn’t technically need it. The task we have here is just an instruction for Ansible to use the apt module, which is bundled with Ansible. We specify three options to the apt module:

  • The name of the package we’re interested in
  • The state we’d like the package to be in: present on the machine
  • Update the apt cache before installing

This last option basically means that Ansible will run apt update before running apt install, if necessary.

If you’re thinking “isn’t this just a very hard way to write sudo apt update && sudo apt install python3.6?”, at this point you’re right. However, the value of Ansible is that you’re not describing actions, but a desired state. So the second time you run Ansible, it detects that Python 3.6 is already installed, and it won’t do anything. Idempotence is one of Ansible’s core principles. Another key benefit is that you can version control changes to server configuration.

So let’s run vagrant up (Ctrl+Shift+A to Find action, and then type vagrant up), and we should have a VM with Python 3.6!

Trouble in Paradise

TASK [Install Python 3.6] ******************************************************
fatal: [default]: FAILED! => {"changed": false, "msg": "No package matching 'python3.6' is available"}
        to retry, use: --limit @/vagrant/setup.retry

Unfortunately, Python 3.6 isn’t available from Ubuntu’s default package repositories. There are several ways to resolve this situation, the easiest would be to find a PPA (Personal Package Archive) which has Python 3.6.

A PPA which is mentioned in many places on the internet is Jonathon F’s PPA. So how would we go about adding this PPA using Ansible? Turns out there are two modules that can help us out here, apt_key and apt_repository. Apt_key allows us to specify the public key associated with the repository, to make sure any releases we get are really from Jonathon. And apt_repository then adds the repository to the apt configuration. So let’s add these two tasks to the playbook, before the install task (Ansible runs tasks in the order specified):

- name: Add key for jonathonf PPA
  apt_key:
    keyserver: keyserver.ubuntu.com
    id: 4AB0F789CBA31744CC7DA76A8CF63AD3F06FC659
    state: present

- name: Add jonathonf PPA
  apt_repository:
    repo: deb http://ppa.launchpad.net/jonathonf/python-3.6/ubuntu xenial main
    state: present

Now run vagrant provision (or Tools | Vagrant | Provision), to rerun the playbook. After completing, we should see the summary:

PLAY RECAP *********************************************************************
default                    : ok=4    changed=3    unreachable=0    failed=0

At this point, let’s create a requirements.txt with the libraries we’ll use today, in this case, just Flask:

Flask==0.12

Most Linux distributions use the system interpreter themselves; that’s one of the reasons virtualenvs are best practice. So let’s create a virtualenv, and then install these packages. As the python3.6 package didn’t include pip, we’ll first need to install pip. Then, using pip, we’ll need to install virtualenv into the system interpreter. After that we’ll be able to create a new virtualenv with the requirements we specify. To do this, add at the end of the playbook:

- name: Install pip3
  apt:
    name: python3-pip
    state: present
    update_cache: yes

- name: Install 'virtualenv' package
  pip:
    name: virtualenv
    executable: pip3

- name: Create virtualenv
  become: no
  pip:
    virtualenv: "/home/vagrant/venv"
    virtualenv_python: python3.6
    requirements: "/vagrant/requirements.txt"

First, we’re using the apt module to install pip. Then, we’re using Ansible’s pip module to install the virtualenv package. And finally we’re using the pip module again, this time to create the virtualenv and then install the packages into the newly created virtualenv. Vagrant automatically mounts the project directory at /vagrant in the VM, so we can refer to our requirements.txt file this way.

At this point we have our Python environment ready, and we could continue going the same way to add a database and anything else we might desire. Let’s have a look to see how we can organize our playbook further. Firstly, we’ve now hardcoded paths with ‘vagrant’, which prevents us from reusing the same playbook later on AWS. Let’s change this:

---
- hosts: all
  become: yes  # This means that all tasks will be executed with sudo
  vars:
    venv_path: "/home/vagrant/venv"
    requirements_path: "/vagrant/requirements.txt"
  tasks:
    … snip …
    - name: Create virtualenv
      become: no
      pip:
        virtualenv: "{{ venv_path }}"
        virtualenv_python: python3.6
        requirements: "{{ requirements_path }}"

The first thing we can do is define variables for these paths. If the variable syntax looks familiar, that’s because it is: Ansible is written in Python, and uses jinja2 for templating.

If we were to add database plays to the same playbook, we’re mixing things that we may want to separate later. Wouldn’t it be easier to have these Python plays somewhere we can call them, and have the database plays in another place? This is possible using Ansible roles. Let’s refactor this playbook into a Python role.

Ansible roles are essentially a folder structure with YAML files that specify the things necessary for the role. To refactor our plays into a Python role, we just need to create a few folders, $PROJECT_HOME/roles/python/tasks, and then place a file called main.yml in that last tasks folder. Copy the list of tasks from our playbook into that file, making sure to unindent them:

- name: Add key for jonathonf PPA
  apt_key:
    keyserver: keyserver.ubuntu.com
    id: 4AB0F789CBA31744CC7DA76A8CF63AD3F06FC659
    state: present

... etc ...

Afterwards, specify in the playbook which role to apply:

---
- hosts: all
  become: yes  # This means that all tasks will be executed with sudo
  vars:
    venv_path: "/home/vagrant/venv"
    requirements_path: "/vagrant/requirements.txt"
  roles:
    - {role: python}

That’s all there is to it! To make sure everything still runs smoothly, run vagrant provision once more to verify everything is applied to the VM.

Running Code from PyCharm

Now that we have a provisioned VM ready to go, let’s write some code!

First let’s set up the Python interpreter. Go to File | Settings | Project Interpreter. Then use the gear icon to select ‘Add Remote’, and choose Vagrant. PyCharm automatically detects most settings; we just need to put in the path to the Python interpreter to tell PyCharm about the virtualenv we created:

 

Now create a new script, let’s name it server.py and add Flask’s Hello World:

from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello():
    return "Hello World!"


if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)

Make sure that you use the host='0.0.0.0' kwarg, as Flask by default only binds to localhost, and otherwise we wouldn’t be able to access our application later.

Now to create a run configuration, just navigate to the script as usual, and check ‘Single instance only’ to prevent the app from failing to start when the port is already in use:

By marking the run configuration as ‘single instance only’ we make sure that we can’t accidentally start the script twice and get a ‘Port already in use’ error.

After saving the run configuration, just click the regular Run or Debug button, and the script should start.

That’s it for today! Stay tuned for the next blog post where we’ll have a look at an application where we build a REST API on top of a database.

Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Anthony Tuininga

Planet Python - Mon, 2017-12-11 08:30

This week we welcome Anthony Tuininga as our PyDev of the Week! Anthony is the creator of the cx_Freeze library among several others in the cx Suite.  You can get a feel for what he’s currently working on over on Github. Let’s take some time to get to know Anthony better!

Can you tell us a little about yourself (hobbies, education, etc):

I grew up in a small town in the central interior of British Columbia, Canada. In spite of it being a small town, my school managed to acquire a personal computer shortly after they were made available. I was fascinated and quickly became the school guru. That experience convinced me that computers and especially programming were in my future. I moved to Edmonton, Alberta, Canada in order to attend university and ended up staying there permanently. Instead of only taking computing science courses I ended up combining them with engineering and received a computer engineering degree. After university I first worked for a small consulting firm, then for a large consulting firm and am now working for the software company, Oracle, in the database group. Besides working with computers I enjoy reading and both cross-country and downhill skiing.

Why did you start using Python?

In the late 1990’s I had developed a C++ library and set of tools to manage Oracle database objects. These worked reasonably well but they took a fair amount of time to both develop and maintain. I discovered Python and its C API and did some experimentation with what eventually became the cx_Oracle Python module. Within a few days I had sufficiently rewritten the C++ library and a couple of the tools using Python and cx_Oracle to prove to myself that Python was an excellent choice. In spite of being interpreted and theoretically slower than C++, the tools I had written in Python were actually faster, primarily due to the fact that I could use more advanced data manipulation techniques in Python with little to no effort compared to C++. I completed the rewrite in Python and continued to expand my use of Python to the point where the flagship product of the company I was working for used it extensively. Thankfully the companies I worked for saw the benefits of the open source model and I was able to make the libraries and tools I developed there available as open source. These include cx_PyGenLib, cx_PyOracleLib, cx_Oracle, cx_OracleTools, cx_OracleDBATools, cx_bsdiff, cx_Logging, ceODBC and cx_Freeze.

What other programming languages do you know and which is your favorite?

I know quite a number of languages to a limited extent as I enjoy experimenting with languages, but the languages I have used regularly over my career are C, C++, SQL, PL/SQL, HTML, JavaScript and Python. Of those, Python is my favorite. I have recently begun experimenting with Go and as a C/C++ replacement it has been a breath of fresh air. Time will tell whether I can find a good use for it, particularly since my current job requires the extensive use of C and that is unlikely to change soon.

What projects are you working on now?

During work hours I am working on a C wrapper for the Oracle Call Interface API called ODPI-C (https://github.com/oracle/odpi), cx_Oracle (https://github.com/oracle/python-cx_Oracle) and node-oracledb (https://github.com/oracle/node-oracledb). Outside of work hours I still do a bit of work on cx_Freeze (https://github.com/anthony-tuininga/cx_Freeze).

Which Python libraries are your favorite (core or 3rd party)?

The modules I have found to be the most useful in my work have been reportlab (cross platform tool for creating PDFs programmatically), xlsxwriter (cross platform tool for creating Excel documents without requiring Excel itself) and wxPython (cross platform GUI toolkit). I have also recently been making use of the virtues of the venv module (earlier known as virtualenv) and have found it to be excellent for testing.

What was your motivation for creating the cx_Freeze package?

As mentioned earlier I had built a number of tools for managing Oracle database objects. I wanted to distribute these to others without requiring them to install Python itself. I first experimented with the freeze tool that comes with Python itself and found that it worked but wasn’t easy to use or create executables. I discovered py2exe but it was only developed for Windows and we had Linux machines on which we wanted to run these tools. So I built cx_Freeze and it worked well enough that I was able to easily distribute my tools and later full applications on both Windows and Linux and (with some help from the community) macOS. My current job doesn’t require this capability so I have not been able to spend as much time on it as I did before.

 

What are the top three things that you have learned while maintaining this project?

These lessons have been learned not just with cx_Freeze but also with cx_Oracle, the other well-used module I originally developed. First, code you write that works well for you will break when other people get their hands on it! Everyone thinks differently and makes mistakes differently and that becomes obvious very quickly. Second, although well-written code is the most important aspect of a project (keep in mind lesson #1), documentation, samples and test cases are nearly as important and take almost as much time to do well, and without them others will find your project considerably more difficult to use. Finally, even though additional people bring additional and possibly conflicting ideas, the project is considerably stronger and useful the more contributors there are.

Is there anything else you’d like to say?

I can’t think of anything right now! Thanks for doing the interview!

Categories: FLOSS Project Planets

Wouter Verhelst: Systemd, Devuan, and Debian

Planet Debian - Mon, 2017-12-11 08:00

Somebody recently pointed me towards a blog post by a small business owner who proclaimed to the world that using Devuan (and not Debian) is better, because it's cheaper.

Hrm.

Looking at creating Devuan, which means splitting of Debian, economically, you caused approximately infinite cost.

Well, no. I'm immensely grateful to the Devuan developers, because when they announced their fork, all the complaints about systemd on the debian-devel mailinglist ceased to exist. Rather than a cost, that was an immensely gratifying experience, and it made sure that I started reading the debian-devel mailinglist again, which I had stopped for a while before that. Meanwhile, life in Debian went on as it always has.

Debian values choice. Fedora may not be about choice, but Debian is. If there are two ways of doing something, Debian will include all four. If you want to run a Linux system, and you're not sure whether to use systemd, upstart, or something else, then Debian is for you! (well, except if you want to use upstart, which is in jessie but not in stretch). Debian defaults to using systemd, but it doesn't enforce it; and while it may require a bit of manual handholding to make sure that systemd never ever ever ends up on your system, this is essentially not difficult.

you@your-machine:~$ apt install equivs; equivs-control your-sanity; $EDITOR your-sanity

Now make sure that what you get looks something like this (ignoring comments):

Section: misc
Priority: standard
Standards-Version: <whatever was there>

Package: your-sanity
Essential: yes
Conflicts: systemd-sysv
Description: Make sure this system does not install what I don't want
 The packages in the Conflicts: header cannot be installed without
 very difficult steps, and apt will never offer to install them.

Install it on every system where you don't want to run systemd. You're done; you'll never run systemd. Well, except if someone types the literal phrase "Yes, do as I say!", including punctuation and everything, when asked to do so. If you do that, well, you get to keep both pieces. Also, did you see my pun there? Yes, it's a bit silly, I admit it.
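For completeness, a sketch of how you'd build and install the package from that control file; the generated filename assumes equivs' default version of 1.0:

you@your-machine:~$ equivs-build your-sanity
you@your-machine:~$ sudo dpkg -i your-sanity_1.0_all.deb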

But before you take that step, consider this.

Four years ago, I was an outspoken opponent of systemd. It was a bad idea, I thought. It is not portable. It will cause the death of Debian GNU/kFreeBSD, and a few other things. It is difficult to understand and debug. It comes with a truckload of other things that want to replace the universe. Most of all, their developers had a pretty bad reputation of being, pardon my French, arrogant assholes.

Then, the systemd maintainers filed bug 796633, asking me to provide a systemd unit for nbd-client, since it provided an rcS init script (which is really a very special case), and the compatibility support for that in systemd was complicated and support for it would be removed from the systemd side. Additionally, providing a systemd template unit would make the systemd nbd experience much better, without dropping support for other init systems (those cases can still use the init script). In order to develop that, I needed a system to test things on. Since I usually test things on my laptop, I installed systemd on my laptop. The intent was to remove it afterwards. However, for various reasons, that never happened, and I still run systemd as my pid1. Here's why:

  • Systemd is much faster. Where my laptop previously took 30 to 45 seconds to boot using sysvinit, it takes less than five. In fact, it took longer for it to do the POST than it took for the system to boot from the time the kernel was loaded. I changed the grub timeout from the default of five seconds to something more reasonable, because I found that five seconds was just ridiculously long if it takes about half that for the rest of the system to boot to a login prompt afterwards.
  • Systemd is much more reliable. That is, it will fail more often, but it will reliably fail. When it fails, it will tell you why it failed, so you can figure out what went wrong and fix it, making sure the system never fails again in the same fashion. The unfortunate fact of the matter is that there were many bugs in our init scripts, but they were never discovered and therefore lingered. For instance, you would not know about this race condition between two init scripts, because sysvinit is so dog slow that 99 times out of 100 it would not trigger, and therefore you don't see it. The one time you do see it, something didn't come up, but sysvinit doesn't log about such errors (it expects the init script to do so), so all you can do is go "damn, wtf happened?!?" and manually start things, allowing the bug to remain. These race conditions were much more likely to trigger with systemd, which caused it a lot of grief originally; but really, you should be thankful, because now that all these race conditions have been discovered by way of an init system that is much more verbose about such problems, they have also been fixed, and your sysvinit system is more reliable, too, as a result. There are other similar issues (dependency loops, to name one) that systemd helped fix.
  • Systemd is different, and that requires some re-schooling. When I first moved my laptop to systemd, I remember running into some kind of issue that I couldn't figure out how to fix. No, I don't remember the specifics of that issue, but they don't really matter. The point is this: at first, I thought "this is horrible, you can't debug it, how can you use such a system". And while it's true that undebuggable systems are not very useful, the systemd maintainers know this too, and therefore systemd is debuggable. It's just that you don't debug it by throwing some imperative init script code through a debugger (or, worse, something like sh -x), because there is no imperative init script code to throw through such a debugger, and therefore that makes little sense. Instead, there is a wealth of different tools to inspect the systemd state, and a lot of documentation on what the different things mean. It takes a while to internalize all that; and if you're not convinced that systemd is a good thing then it may mean some cursing while you're fighting your way through. But in the end, systemd is not more difficult to debug than simple init scripts -- in fact, it sometimes may be easier, because the system is easier to reason about.
  • While systemd comes with a truckload of extra daemons (systemd-networkd, systemd-resolved, systemd-hostnamed, etc etc etc), the systemd in their name do not imply that they are required by systemd. In fact, it's the other way around: you are required to run systemd if you want to run systemd-networkd (etc), because systemd-networkd (etc) make extensive use of the systemd infrastructure and public APIs; but nothing inside systemd requires that systemd-networkd (etc) are running. In fact, on my personal laptop, beyond systemd and udev themselves, I'm not using anything that gets built from the systemd source.

I'm not saying these reasons are universally true, and I'm not saying that you'll like systemd as much as I have. I am saying, however, that you should give it an honest attempt before you say "I'm not going to run systemd, ever," because you might be surprised by the huge gap of difference between what you expected and what you got. I know I was.

So, given all that, do I think that Devuan is a good idea? It is if you want flamewars. It gives those people who want to vilify systemd a place to do that without bothering Debian with their opinions. But beyond that, if you want to run Debian and you don't want to run systemd, you can! Just make sure you choose the right options, and you're done.

All that makes me wonder why today, almost half a year after the initial release of Debian 9.0 "Stretch", Devuan Ascii still hasn't released, and why it took them over two years to release their Devuan Jessie based on Debian Jessie. But maybe that's just me.

Categories: FLOSS Project Planets

PyCharm: PyCharm 2017.3.1 RC

Planet Python - Mon, 2017-12-11 05:39

We have a couple of fixes and small improvements for you in PyCharm 2017.3.1. If you’d like to try them now, you can get the release candidate from the Confluence page.

New in this version:

  • Several issues with running Python modules (-m) were resolved: running modules remotely, showing command line after running
  • Further issues with running code over SSH were resolved: they can now connect to IPv6 hosts from macOS, don’t misinterpret ProxyCommand: none, and correctly parse the HostKeyAlgorithms option (Connecting to a Python interpreter over SSH is only supported in PyCharm Professional Edition)
  • Code insight for SQLAlchemy was improved, the issue with ‘incorrect call arguments’ has been fixed.

To try this now, get the RC from Confluence. You can also update from within PyCharm; just make sure that your update channel is set to ‘EAP’ or ‘RC’ (in Help | Check for Updates).

If you use multiple JetBrains applications, you can use JetBrains Toolbox to make sure all your JetBrains IDEs stay up to date. PyCharm is also available as a snap package. If you’re on Ubuntu 16.04 or later, you can install PyCharm by using this command:

sudo snap install [pycharm-professional|pycharm-community] --classic --candidate

Categories: FLOSS Project Planets

Interview with Rytelier

Planet KDE - Mon, 2017-12-11 02:58
Could you tell us something about yourself?

I’m Rytelier, a digital artist. I’ve had an interest in creating art for a few years; I mainly want to visualize my original world.

Do you paint professionally, as a hobby artist, or both?

Currently I only do personal work, but I will look for some freelance work in the future.

What genre(s) do you work in?

I work mainly in science fiction – I’m creating an original world. I like to try various things, from creatures to landscapes and architecture. There are so many things to design in this world.

Whose work inspires you most — who are your role models as an artist?

It’s hard to point out specific artists; there are so many. Mainly I get inspired by fantasy art from the internet; I explore various websites to find interesting art.

I recommend looking at my favourites gallery, there are many works that inspire me.

How and when did you get to try digital painting for the first time?

It was years ago. I got interested in the subject after I saw other people’s work. It was obviously confusing: how to place strokes, how to mix colors. And I had to get used to not looking at my hand when doing something on the tablet.

What makes you choose digital over traditional painting?

I like the freedom and flexibility that digital art gives. I can create a variety of textures, find colors more easily and fix mistakes.

How did you find out about Krita?

I saw a news item about Krita on some website related to digital art and decided to try it.

What was your first impression?

I liked how many interesting brushes there were. As time went on I discovered more useful features. It was surprising to find out that some functions aren’t available in Photoshop.

What do you love about Krita?

It has many useful functions and is very convenient to use. I love the brush editor – it’s clean and simple to understand, but powerful. The dynamics curve adjustment is useful; the size-dependent brush with a sunken curve allows me to paint fur and grass more easily.

There are also the different functional brush engines. Color smudge is nice for more traditional work, like mixing wet paint. The shape brush is like a lasso, but better, because it shows the shape instantly, without having to use the fill tool. The filter brush is nice too; I mainly use it as sharpen and as a customizable burn/dodge. There are also ways to color line art quickly. For a free program that functionality is amazing — it would be amazing even for a paid program! I like this software much more than Photoshop.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Performance is the thing I most want to see improved, for painting and filters. I’m happy to see multi-threaded brushes in the 4.0 version. I would also like a more dynamic preview when applying filters like the gradient map, where it updates instantly when moving the color on the color wheel. It annoys me that large brush files (brushes with big textures) don’t load; I have to optimize my textures by reducing their size so the brush can load.

What sets Krita apart from the other tools that you use?

The amount of convenience is very high compared to other programs. The number of “this should be designed in a better way, it annoys me” things is the smallest of all the programs I use, and where something is broken, most of those functions are slated for improvement in 4.0.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

It’s hard to pick a favourite. I think this one, because I challenged myself in this picture, and it shows my original characters, which I like a lot.

What techniques and brushes did you use in it?

I use brushes that I’ve created myself from resources found on the internet and from pictures I scanned myself. I like to use slightly different ways of painting in every artwork; I’m still looking for the techniques that suit me best. Generally I start from a sketch, then paint splatter going all over the canvas, then add blurry forms, then add details. Starting from soft edges allows me to find good colors more easily.

Where can people see more of your work?

https://rytelierart.deviantart.com/
I will open galleries in other sites in the future.

Anything else you’d like to share?

I hope that Krita will get more exposure and that more people, including professionals, will use it and donate to its development team instead of buying expensive digital art programs. Open source software is having a great time; more and more tools are being created that replace the expensive ones in various categories.

Categories: FLOSS Project Planets

Full Stack Python: GitPython and New Git Tutorials

Planet Python - Mon, 2017-12-11 00:00

First Steps with GitPython is a quick tutorial that shows how to get started using the awesome GitPython library for programmatically interacting with Git repositories in your Python applications. In the spirit of the thank you maintainers issue ticket I wrote about last newsletter, I opened a quick "non-issue" ticket for the GitPython developers to thank them. Give them a thank you +1 if you've used the project and also found it useful.
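If you haven’t used GitPython before, here is a small sketch of the kind of thing the tutorial covers. This is my own illustration rather than code from the tutorial, and the repository path is a placeholder:

from git import Repo

# Open an existing repository and inspect it.
repo = Repo('/path/to/some/repo')
print(repo.active_branch.name)
for commit in repo.iter_commits(max_count=3):
    print(commit.hexsha[:7], commit.summary)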

The Git page on Full Stack Python has also just been updated with new resources. A few of my favorite new tutorials listed on the Git page are:

I also split out the Git page resources into beginner, more advanced, specific use case and workflow sections so it's easier to parse based on whether you're a Git veteran or still up-and-coming in that area of your development skills.

Got questions or comments about Full Stack Python? Send me an email or submit an issue ticket on GitHub to let me know how to improve the site as I continue to fill in the table of contents with new pages and new tutorials.

Categories: FLOSS Project Planets

Codementor: PCA using Python (scikit-learn, pandas)

Planet Python - Sun, 2017-12-10 22:19
To understand the value of using PCA for data visualization, the first part of this tutorial post goes over a basic visualization of the IRIS dataset after applying PCA. The second part uses PCA to speed up a machine learning algorithm (logistic regression) on the MNIST dataset.
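As a taste of the first part, here is a minimal sketch of PCA for visualization using scikit-learn’s bundled iris data. This is my own illustration of the idea, not the tutorial’s exact code:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Standardize the features, then project onto the first two components.
iris = load_iris()
X = StandardScaler().fit_transform(iris.data)
components = PCA(n_components=2).fit_transform(X)
print(components[:5])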
Categories: FLOSS Project Planets

François Marier: Using all of the 5 GHz WiFi frequencies in a Gargoyle Router

Planet Debian - Sun, 2017-12-10 21:03

WiFi in the 2.4 GHz range is usually fairly congested in urban environments. The 5 GHz band used to be better, but an increasing number of routers now support it and so it has become fairly busy as well. It turns out that there are a number of channels on that band that nobody appears to be using despite being legal in my region.

Why are the middle channels unused?

I'm not entirely sure why these channels are completely empty in my area, but I would speculate that access point manufacturers don't want to deal with the extra complexity of the middle channels. Indeed these channels are not entirely unlicensed. They are also used by weather radars, for example. If you look at the regulatory rules that ship with your OS:

$ iw reg get
global
country CA: DFS-FCC
    (2402 - 2472 @ 40), (N/A, 30), (N/A)
    (5170 - 5250 @ 80), (N/A, 17), (N/A), AUTO-BW
    (5250 - 5330 @ 80), (N/A, 24), (0 ms), DFS, AUTO-BW
    (5490 - 5600 @ 80), (N/A, 24), (0 ms), DFS
    (5650 - 5730 @ 80), (N/A, 24), (0 ms), DFS
    (5735 - 5835 @ 80), (N/A, 30), (N/A)

you will see that these channels are flagged with "DFS". That stands for Dynamic Frequency Selection and it means that WiFi equipment needs to be able to detect when the frequency is used by radars (by detecting their pulses) and automatically switch to a different channel for a few minutes.

So an access point needs extra hardware and extra code to avoid interfering with priority users. Additionally, different channels have different bandwidth limits so that's something else to consider if you want to use 40/80 MHz at once.

Using all legal channels in Gargoyle

The first time I tried setting my access point channel to one of the middle 5 GHz channels, the SSID wouldn't show up in scans and the channel was still empty in WiFi Analyzer.

I tried changing the channel again, but this time I ssh'd into my router and watched the error messages using this command:

logread -f

I found a number of errors claiming that these channels were not authorized for the "world" regulatory authority.

Because Gargoyle is based on OpenWRT, there are a lot more wireless configuration options available than what's exposed in the Web UI.

In this case, the solution was to explicitly set my country in the wireless options by putting:

country 'CA'

(where CA is the country code where the router is physically located) in the 5 GHz radio section of /etc/config/wireless on the router.
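For context, here is a hypothetical excerpt of what that radio section might look like in OpenWRT's UCI syntax; the device name and channel are illustrative, and only the country option comes from this post:

config wifi-device 'radio1'
        option type 'mac80211'
        option hwmode '11a'
        option channel '120'
        option country 'CA'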

Then I rebooted and I was able to set the channel successfully via the Web UI.

If you are interested, there is a lot more information about how all of this works in the kernel documentation for the wireless stack.

Categories: FLOSS Project Planets

Frameworks 5.41.0 Now available in KDE neon user editions

Planet KDE - Sun, 2017-12-10 18:01

Frameworks 5.41.0 has been released!

https://www.kde.org/announcements/kde-frameworks-5.41.0.php

And now available in Neon. Enjoy!

 

Categories: FLOSS Project Planets

Spinning Code: A Process to create a Drupal 8 module’s Config

Planet Drupal - Sun, 2017-12-10 13:43

One of the best practices for Drupal 8 that is still emerging is how to create modules with complex deployable configuration. In the past we often abused the features module to do this, and while that continues to be an option, with Drupal 8’s vastly improved configuration management options and the ability to install configuration easily I have been looking for something better. I particularly want to build modules that don’t have unnecessary dependencies but I can still reliably include all the needed configuration in my project. And after a few tries I think I’ve struck on an effective process.

Let’s start with a quick refresher on installing configuration for a Drupal 8 module. During module installation Drupal will load any YAML files that match configuration patterns it already knows about and that are included in your module’s config/install directory. In theory this is great, but if you want to include configuration that comes with other modules you have to figure out which files are needed; if you want to include configuration from core modules you will probably need to find a fairly large collection of files to get all the required elements. Finding all those files, and copying them quickly and easily, is the challenge I set out to solve.

My process starts with a local development sandbox site that is just there to support this development work, and I create a local git repository for the site’s configuration (I don’t need to connect it to a remote, like Bitbucket or GitHub, or handle all of the site’s code since it’s just to support finding changes to config files). Once installation and any base configuration is complete I export the site’s config to the directory covered by the repo (here I used d8_builder/config/sync, the site itself was at d8_builder/pub), and make sure all changes in the repository are committed:
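(The original snippet isn’t reproduced here, but based on the Drupal Console command shown later in this post, that initial export-and-commit step presumably looks something like the following sketch; the commit message is mine:)

$ drupal config:export
$ cd d8_builder/config/sync
$ git add .
$ git commit -m "Base site configuration"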

Now I create my module and a second repository just for it. The module’s repository is linked to a remote since this is the actual product I’m creating.

With that plumbing in place I can to make whatever configuration change I need included in the module. Lately I’ve been creating a custom moderation workflow with several user roles and edge cases that will need to be deployed on a dozen or so sites, so you’ll see that reflected below, but this process should work for just about any project with lots of interrelated configuration.

Once I have completed a set of changes, I export the site’s configuration again:

drupal config:export

Now git can easily show which configuration files were changed, added, or removed:

Next I use git, xargs, and cp to copy those files into your module (hat tip on this detail to Andy Gregorowicz):
git ls-files -om --exclude-standard --exclude=core.extensions.yml |  xargs -I{} cp "{}" pub/modules/custom/fancy_workflow/config/install/

Notice that I skip the core.extensions.yml file. If your module has dependencies, you’ll still need to update your module’s info.yml file to list them.
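For illustration, here is a hypothetical info.yml for the module named in this post, with a dependencies list added. The dependency entries are examples of mine (plausible for a custom moderation workflow), not taken from the original post:

name: Fancy Workflow
type: module
description: Deployable custom moderation workflow configuration.
core: 8.x
# Example dependencies only -- list whatever your exported config requires.
dependencies:
  - workflows
  - content_moderation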

These files are great except for one detail: they all start with the UUID of the sandbox site, which will break imports. So I hop into the module’s config/install directory and use sed to remove those lines:
sed -i '/^uuid/d' *

Now a quick commit and push of the changes to the module’s repo, and I’m ready to pull the module into other projects. I also commit the builder repo to ensure it’s easy to track any future changes.

This isn’t a replacement for tools like Configuration Installer, which are designed to handle an entire site; this is intended just for module development.

If you think you have a better solution, or that I’m missing something important please let me know.

Categories: FLOSS Project Planets

DrupalEasy: DrupalEasy Podcast 200 - Ryan's Drumroll

Planet Drupal - Sun, 2017-12-10 12:27

Direct .mp3 file download.

The three original hosts of the DrupalEasy Podcast, Andrew Riley, Ryan Price, and Mike Anello take a look back at episode 1 of the podcast, the last 9 years of Drupal, and what the next 5 years may bring.

Discussion

DrupalEasy News

Sponsors
  • Drupal Aid - Drupal support and maintenance services. Get unlimited support, monthly maintenance, and unlimited small jobs starting at $99/mo.
  • WebEnabled.com - devPanel.
Upcoming events

Follow us on Twitter

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Categories: FLOSS Project Planets

Andrew Cater: Debian 8.10 and Debian 9.3 released - CDs and DVDs published

Planet Debian - Sun, 2017-12-10 12:27
Done a tiny bit of testing for this: Sledge and RattusRattus and others have done far more.

Always makes me feel good: always makes me feel as if Debian is progressing - and I'm always amazed I can persuade my oldest 32-bit laptop to work :)
Categories: FLOSS Project Planets

Marius Gedminas: Switching to HTTPS

Planet Python - Sun, 2017-12-10 07:26

It’s 2017, so it’s time to make this blog HTTPS-only, with permanent redirects and HSTS headers and everything.

Apologies to any planets that might get flooded when the RSS feed changes all permalinks to https.

(P.S. Hugo is a pain, as I expected. Content-Security-Policy headers are also made of pain, so I’m skipping one for this blog right now.)

Categories: FLOSS Project Planets

Testing a switch to default Breeze-Dark Plasma theme in Bionic daily isos and default settings

Planet KDE - Sun, 2017-12-10 05:43

Today’s daily ISO for Bionic Beaver 18.04 sees an experimental switch to the Breeze-Dark Plasma theme by default.

Users running the 18.04 development version who have not deliberately opted to use Breeze/Breeze-Light in their systemsettings will also see the change after upgrading packages.

Users can easily revert back to the Breeze/Breeze-Light Plasma themes by changing this in systemsettings.

Feedback on this change will be very welcome:

You can reach us on the Kubuntu IRC channel or Telegram group, on our user mailing list, or post feedback on the (unofficial) Kubuntu web forums.

Thank you to Michael Tunnell from TuxDigital.com for kindly suggesting this change.

Categories: FLOSS Project Planets