First month report: my feelings about GSoC

Planet KDE - Mon, 2017-05-22 04:12

Hi, I'm Davide and I'm 22.
I was born on May 17th, so I'm considering being accepted by KDE a little birthday gift.
The first month is usually dedicated to "Community Bonding". What does that mean?

First of all, I created this blog. Here I'll post updates about Chat Bridge (now renamed to Brooklyn) and myself.
Then, I recovered my KDE Identity account. The main problem was that I had lost my username.
So I wrote to sysadmin@kde.org and five minutes later the username was no longer a problem.
Shortly after, I did a lot of other stuff, but I don't want to bore my readers.

After this boring to-do list, I contacted my mentor to keep him updated.
We decided to start development of the application and defined what the app's configuration file should look like.
It is, of course, open source, so you can use it for your own projects! For now it works only with IRC and Telegram, but it will soon support Rocketchat too. It can also handle only plain text at the moment, but that's temporary, don't worry.

I'm planning (but I haven't decided yet because of university exams) to go to Akademy 2017 with some folks from WikiToLearn.
I can't wait to start coding!

What do you think about this project?
Do you have plans to use it?
Don't be shy, write me anything you want!

Categories: FLOSS Project Planets

Code Positive: Rich Snippets & Structured data

Planet Drupal - Mon, 2017-05-22 04:05

The benefits of Rich snippets and how to implement structured data in Drupal 8 to enhance the way your pages are listed by search engines.



Categories: FLOSS Project Planets

Gocept Weblog: See you on PyConWeb in Munich?

Planet Python - Mon, 2017-05-22 03:54

The gocept team will join PyConWeb 2017 in Munich from 27th to 28th of May – hey, this is less than one week from now! It seems that there are still tickets available.

I myself will present RestrictedPython – or how to port to Python 3 without porting dependencies – on Saturday at 3 p.m.

See you in Munich!

Categories: FLOSS Project Planets

Catalin George Festila: Make an executable from a Python script.

Planet Python - Mon, 2017-05-22 02:20
The official website of this tool tells us:
PyInstaller bundles a Python application and all its dependencies into a single package. The user can run the packaged app without installing a Python interpreter or any modules. PyInstaller supports Python 2.7 and Python 3.3+, and correctly bundles the major Python packages such as numpy, PyQt, Django, wxPython, and others.

PyInstaller is tested against Windows, Mac OS X, and Linux. However, it is not a cross-compiler: to make a Windows app you run PyInstaller in Windows; to make a Linux app you run it in Linux, etc. PyInstaller has been used successfully with AIX, Solaris, and FreeBSD, but is not tested against them.

The manual for this tool can be found here.
C:\Python27>cd Scripts

C:\Python27\Scripts>pip install pyinstaller
Collecting pyinstaller
Downloading PyInstaller-3.2.1.tar.bz2 (2.4MB)
100% |################################| 2.4MB 453kB/s
Collecting pypiwin32 (from pyinstaller)
Downloading pypiwin32-219-cp27-none-win32.whl (6.7MB)
100% |################################| 6.7MB 175kB/s
Successfully installed pyinstaller-3.2.1 pypiwin32-219
This also installs the PyWin32 Python module.
Let's make a test Python script and then turn it into an executable.
I used this script to test it:
from tkinter import Tk, Label, Button   # note: on Python 2 the module is named Tkinter

class MyFirstGUI:
    def __init__(self, master):
        self.master = master
        master.title("A simple GUI")

        self.label = Label(master, text="This is our first GUI!")
        self.label.pack()   # the pack() calls (likely lost in extraction) make the widgets visible

        self.greet_button = Button(master, text="Greet", command=self.greet)
        self.greet_button.pack()

        self.close_button = Button(master, text="Close", command=master.quit)
        self.close_button.pack()

    def greet(self):
        # the body of this method was lost in the original post;
        # a simple print keeps the script runnable
        print("Greetings!")

root = Tk()
my_gui = MyFirstGUI(root)
root.mainloop()
The output of the pyinstaller command:
C:\Python27\Scripts>pyinstaller.exe --onefile --windowed ..\tk_app.py
92 INFO: PyInstaller: 3.2.1
92 INFO: Python: 2.7.13
93 INFO: Platform: Windows-10-10.0.14393
93 INFO: wrote C:\Python27\Scripts\tk_app.spec
95 INFO: UPX is not available.
96 INFO: Extending PYTHONPATH with paths
['C:\\Python27', 'C:\\Python27\\Scripts']
96 INFO: checking Analysis
135 INFO: checking PYZ
151 INFO: checking PKG
151 INFO: Building because toc changed
151 INFO: Building PKG (CArchive) out00-PKG.pkg
213 INFO: Redirecting Microsoft.VC90.CRT version (9, 0, 21022, 8) -> (9, 0, 30729, 9247)
2120 INFO: Building PKG (CArchive) out00-PKG.pkg completed successfully.
2251 INFO: Bootloader c:\python27\lib\site-packages\PyInstaller\bootloader\Windows-32bit\runw.exe
2251 INFO: checking EXE
2251 INFO: Rebuilding out00-EXE.toc because tk_app.exe missing
2251 INFO: Building EXE from out00-EXE.toc
2267 INFO: Appending archive to EXE C:\Python27\Scripts\dist\tk_app.exe
2267 INFO: Building EXE from out00-EXE.toc completed successfully.
Then I ran the resulting executable:

C:\Python27\Scripts>
...and it works well.

The output file comes with this icon:

You can also make changes by using your own icon, or set the file's version metadata according to the VS_FIXEDFILEINFO structure.
You need the icon file and/or a version.txt file for the VS_FIXEDFILEINFO structure.
Let's see the version.txt file:
# UTF-8
# For more details about fixed file info 'ffi' see:
# http://msdn.microsoft.com/en-us/library/ms646997.aspx
# filevers and prodvers should be always a tuple with four items: (1, 2, 3, 4)
# Set not needed items to zero 0.
filevers=(2017, 1, 1, 1),
prodvers=(1, 1, 1, 1),
# Contains a bitmask that specifies the valid bits 'flags'
# Contains a bitmask that specifies the Boolean attributes of the file.
# The operating system for which this file was designed.
# 0x4 - NT and there is no need to change it.
# The general type of file.
# 0x1 - the file is an application.
# The function of the file.
# 0x0 - the function is not defined for this fileType
# Creation date and time stamp.
date=(0, 0)
[StringStruct(u'CompanyName', u'python-catalin'),
StringStruct(u'ProductName', u'test'),
StringStruct(u'ProductVersion', u'1, 1, 1, 1'),
StringStruct(u'InternalName', u'tk_app'),
StringStruct(u'OriginalFilename', u'tk_app.exe'),
StringStruct(u'FileVersion', u'2017, 1, 1, 1'),
StringStruct(u'FileDescription', u'test tk'),
StringStruct(u'LegalCopyright', u'Copyright 2017 free-tutorials.org.'),
StringStruct(u'LegalTrademarks', u'tk_app is a registered trademark of catafest.'),])
VarFileInfo([VarStruct(u'Translation', [0x409, 1200])])
)
Now you can use this command for the tk_app.py and version.txt files from the C:\Python27 folder:
pyinstaller.exe --onefile --windowed --version-file=..\version.txt ..\tk_app.py
Let's see this info in the executable file:

If you want to change the icon, add --icon=tk_app.ico, where tk_app.ico is the new icon for the executable.

Categories: FLOSS Project Planets

Catalin George Festila: Updating all Python packages with pip on Windows OS.

Planet Python - Mon, 2017-05-22 01:35
Just use the Python module named pip-review:
C:\Python27\Scripts>pip install pip-review
C:\Python27\Scripts>pip-review.exe --auto --verbose
Checking for updates of ...
Categories: FLOSS Project Planets

Catalin George Festila: The pycrypto python module - part 001.

Planet Python - Mon, 2017-05-22 00:45
The pycrypto module is the Python Cryptography Toolkit, a collection of cryptographic modules.
This Python module was created by Andrew Kuchling and is now maintained by Dwayne C. Litzenberger.
Let's install it under Windows 10 using the Command Prompt (Admin) shell.
C:\WINDOWS\system32>cd ..

C:\Windows>cd ..

C:\>cd Python27\Scripts

C:\Python27\Scripts>pip install pycrypto
Requirement already satisfied: pycrypto in c:\python27\lib\site-packages
Some info and help can be seen in the Python shell like this:
Python 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 20:42:59) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import Crypto
>>> dir(Crypto)
['__all__', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__',
'__revision__', '__version__', 'version_info']
>>> help(Crypto)
Help on package Crypto:

Crypto - Python Cryptography Toolkit


A collection of cryptographic modules implementing various algorithms
and protocols.


Crypto.Cipher     Secret-key (AES, DES, ARC4) and public-key encryption (RSA PKCS#1) algorithms
Crypto.Hash       Hashing algorithms (MD5, SHA, HMAC)
Crypto.Protocol   Cryptographic protocols (Chaffing, all-or-nothing transform, key derivation
                  functions). This package does not contain any network protocols.
Crypto.PublicKey  Public-key encryption and signature algorithms (RSA, DSA)
Crypto.Signature  Public-key signature algorithms (RSA PKCS#1)
Crypto.Util       Various useful modules and functions (long-to-string conversion, random number
                  generation, number theoretic functions)

Cipher (package)
Hash (package)
Protocol (package)
PublicKey (package)
Random (package)
SelfTest (package)
Signature (package)
Util (package)

__all__ = ['Cipher', 'Hash', 'Protocol', 'PublicKey', 'Util', 'Signatu...
__revision__ = '$Id$'
__version__ = '2.6.1'

2.6.1
Let's test some examples with this Python module.
The first example encrypts and decrypts a message with one key.
The key needs to be a valid encryption key, padded to 32 bytes (key32).
The iv is not specified by the user; it is generated for each message and prepended to the ciphertext.
NEVER make the IV constant; it must be unique for every message.
Let's see the example source code:
from Crypto.Cipher import AES
from Crypto import Random

def encrypt(key32, message):
    iv = Random.new().read(AES.block_size)
    enc = AES.new(key32, AES.MODE_CFB, iv)
    msg = iv + enc.encrypt(message)
    return msg

def decrypt(key32, msg):
    iv = msg[:AES.block_size]
    dec = AES.new(key32, AES.MODE_CFB, iv)
    return dec.decrypt(msg[AES.block_size:]).decode('ascii')

if __name__ == '__main__':
    key = 'my encryption key'  # placeholder; the key used in the original post was lost
    key32 = "".join([' ' if i >= len(key) else key[i] for i in range(32)])
    message = 'another website with free tutorials'
    enc = encrypt(key32, message)
    print enc
    print(decrypt(key32, enc))
The resulting output is this:
ᄚ Cᆪ゚2 ᄊÕ|ýXÍ ᄇNäÇ3ヨ゙Lマᆱuï: ù メNᄚm
ᄚ Cᆪ゚2 ᄊÕ|ýXÍ ᄇNäÇ3ヨ゙Lマᆱuï: ù メNᄚm
another website with free tutorials

Another, simpler example:
from Crypto.Cipher import AES
from Crypto import Random
key = b'Sixteen byte key'
iv = Random.new().read(AES.block_size)
cipher = AES.new(key, AES.MODE_CFB, iv)
msg = iv + cipher.encrypt(b'Attack at dawn')
See the output of the variables:
>>> print key
Sixteen byte key
>>> print iv
ÔÄ▀DÒ ÕØ} m║dÕ╚\
>>> print cipher.encrypt(b'Attack at dawn')
åÌ£┴\u\ÍÈSÕ╦╔.
An MD5 example:
>>> from Crypto.Hash import MD5
>>> MD5.new('free text').hexdigest()
'be9420c1596a781119c53a9933a8234f'
An RSA key example:
>>> from Crypto.PublicKey import RSA
>>> from Crypto import Random
>>> rng = Random.new().read
>>> RSAkey = RSA.generate(1024, rng)
>>> public_key = RSAkey.publickey()
>>> print public_key
<_RSAobj @0x3650b98 n(1024),e>
>>> enc_data = public_key.encrypt('test data', 32)[0]
>>> print enc_data
H +îÕÊ ÙH:?ª2S½Fã0á! f¬ = ·+,Í0r³┐o·¼ÉlWy¿6ôên(£jê¿ ╦çª|*°q Ò4ì┌çÏD¦¿╝û╠╠MY¶ïzµ>©a}hRô ]í;
_[v¸¤u:2¦y¾/ ²4R╩HvéÌ'÷Ç)KT:P _<! D
>>> dec_data = RSAkey.decrypt(enc_data)
>>> print dec_data
test data
Encrypted output may look different depending on the encoding used by your text editor or terminal and on your Python version.
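Since raw ciphertext is binary, printing it directly is what produces the garbled characters above. A common remedy (my addition, not part of the original post) is to base64-encode binary data before printing; a minimal stdlib-only sketch, with arbitrary bytes standing in for real ciphertext:

```python
import base64

# arbitrary binary data standing in for AES/RSA ciphertext
ciphertext = b'\x00\x1f\x7f\xff attack at dawn \xfe\x80'

# base64 turns any byte string into printable ASCII
encoded = base64.b64encode(ciphertext)
print(encoded.decode('ascii'))

# and decodes back to the exact original bytes
decoded = base64.b64decode(encoded)
assert decoded == ciphertext
```

This way the encrypted output survives copy-paste between editors and terminals regardless of their character encoding.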

Categories: FLOSS Project Planets

Polyglot.Ninja(): Django REST Framework: Authentication and Permissions

Planet Python - Sun, 2017-05-21 19:19

In our last post about ViewSet, ModelViewSet and Router, we saw how easily we can create REST APIs with the awesome Django REST Framework. In this blog post, we would see how we can secure our endpoints with user authentication and permissions. Authentication will help us identify which user is currently logged in and permissions will decide which user(s) can access which resources.


The idea of authentication is pretty simple. When a new incoming request comes, we have to check the request and see if we can identify any user credentials along with it. If you have read the Flask HTTP Auth tutorial or the one about JWT, you might remember how we were checking the authorization header to authenticate our users. We might also receive the user login data via a POST request (form submission) or the user may already be logged in and we can identify using the session data.

We can see that the authentication mechanism can vary widely. Django REST Framework is very flexible in accommodating them. We can give DRF a list of classes, and DRF will run the authenticate method on those classes. As soon as a class successfully authenticates the user, the return values from the call are set as request.user and request.auth. If none of the classes manage to authenticate the user, then the user is set to django.contrib.auth.models.AnonymousUser.
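The selection logic can be sketched in plain Python (a simplified, Django-free illustration of the behaviour described above; all class and function names here are my own, not DRF's):

```python
class AnonymousUser:
    """Stand-in for django.contrib.auth.models.AnonymousUser."""

class FakeTokenAuth:
    # toy authenticator: accepts a dict-like "request" carrying a token
    def authenticate(self, request):
        if request.get("token") == "secret":
            return ("alice", "secret")   # a (user, auth) pair on success
        return None                      # "not my scheme" - try the next class

class Request(dict):
    pass   # dict subclass so we can also set .user / .auth attributes

def run_authenticators(request, authenticators):
    # Try each configured class in order; the first one that returns a
    # (user, auth) pair wins and populates request.user / request.auth.
    for auth in authenticators:
        result = auth.authenticate(request)
        if result is not None:
            request.user, request.auth = result
            return request.user
    # nobody authenticated the request: fall back to the anonymous user
    request.user, request.auth = AnonymousUser(), None
    return request.user

req = Request(token="secret")
assert run_authenticators(req, [FakeTokenAuth()]) == "alice"

anon = Request()
assert isinstance(run_authenticators(anon, [FakeTokenAuth()]), AnonymousUser)
```

The real DRF dispatcher does more bookkeeping, but the "first successful class wins, otherwise AnonymousUser" shape is the same.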

We can set these classes using the DEFAULT_AUTHENTICATION_CLASSES settings under the DRF settings. Here’s an example:

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework.authentication.BasicAuthentication',
        'rest_framework.authentication.SessionAuthentication',
    )
}

In the example above we used BasicAuthentication and SessionAuthentication – two of the built in classes from Django REST Framework. We will look at how they work and we will also check how we can write our own class for our custom authentication.

(PS: Here we set the authentication policy globally, for all views / paths / resources – if we want, we can also use a different authentication mechanism for each one individually, but that is rarely needed.)

Basic Authentication

In our example before, we mentioned the BasicAuthentication class. This class first checks the http authorization header (HTTP_AUTHORIZATION in request.META). If the header contains an appropriate string (something like Basic <Base64 Encoded Login>), it will decode the string, split out the username and password and try to authenticate the user.
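What that header looks like on the wire can be sketched with the standard library alone (the user:password pair below is a hypothetical example, not from the article):

```python
import base64

# Build the header a client would send for user "test_user", password "awesomepwd"
credentials = base64.b64encode(b"test_user:awesomepwd").decode("ascii")
header = "Basic " + credentials
print(header)   # Basic dGVzdF91c2VyOmF3ZXNvbWVwd2Q=

# What BasicAuthentication does, in essence: decode and split on the first ":"
decoded = base64.b64decode(credentials).decode("ascii")
username, _, password = decoded.partition(":")
assert (username, password) == ("test_user", "awesomepwd")
```

Note that base64 is an encoding, not encryption – anyone who can read the header can read the password, which is one reason Basic auth is discouraged in production without TLS.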

Basic Authentication is very simple, easy to setup and might be quite convenient for testing / debugging but I would highly discourage using this method on production.

Session Authentication

If you have used Django, you already know about session based authentication. In fact, Django itself handles the session based auth and sets the user as part of the request object (an instance of HttpRequest). DRF just reads the user data from the request and checks for CSRF. That's it.

Session Authentication works very well if your users interact with your API on the web, perhaps via ajax calls. In that case, once the user is logged in, his/her auth is stored in the session and we can depend on that data while making requests from our web app. However, this will not work well if the client doesn't or can't accept cookies (apps on different domains, mobile or desktop apps, other micro services etc).

Token Authentication

If you understand JWT, this one will feel similar, except in this case, the token will be just a “token”, no JSON or no signing. The user logs in and gets a token. On subsequent requests, this token must be passed as part of the authorization header.

To use token based auth, we first need to add the rest_framework.authtoken app to the INSTALLED_APPS list in your settings.py file. And then run the migration to create the related tables.

python manage.py migrate

We also need to add the TokenAuthentication class to our DRF auth class list:

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework.authentication.BasicAuthentication',
        'rest_framework.authentication.SessionAuthentication',
        'rest_framework.authentication.TokenAuthentication',
    )
}

Now let’s create a view to issue tokens to user.

from django.contrib.auth import authenticate
from rest_framework.decorators import api_view
from rest_framework.response import Response
from rest_framework.status import HTTP_401_UNAUTHORIZED
from rest_framework.authtoken.models import Token

@api_view(["POST"])
def login(request):
    username = request.data.get("username")
    password = request.data.get("password")

    user = authenticate(username=username, password=password)
    if not user:
        return Response({"error": "Login failed"}, status=HTTP_401_UNAUTHORIZED)

    token, _ = Token.objects.get_or_create(user=user)
    return Response({"token": token.key})

The code here should be self explanatory. We take the username and password, then try to authenticate the user using Django's default authentication (checking the username and password against what's stored in the database). If the authentication fails, we return an error message along with HTTP status code 401. If it succeeds, we issue a token for the user and pass it in the response.

We need to add this view to our urlpatterns next:

url(r'^login', login)

Now let’s try it out:

$ curl --request POST \
  --url http://localhost:8000/api/login \
  --header 'content-type: application/json' \
  --data '{"username": "test_user", "password": "awesomepwd"}'

{"token":"5e2effff34c85c11a8720a597b96d73a4634c9ad"}

So we’re getting the tokens successfully. Now to access a secured resource, we need to pass it as part of the authorization header. But how do we make a resource available only to a logged in user? Well, permissions come into play here.


Permissions

While authentication tells us which user is logged in (or not), it's our responsibility to check whether the current user (a valid logged in user or a guest, not logged in visitor) has access to the resource. Permissions help us deal with that. Just like authentication, we can set permission classes globally or on each resource individually. Let's start with the IsAuthenticated permission and add it to our SubscriberViewSet.

from rest_framework.permissions import IsAuthenticated

class SubscriberViewSet(ModelViewSet):
    serializer_class = SubscriberSerializer
    queryset = Subscriber.objects.all()
    permission_classes = (IsAuthenticated,)

If we try to access subscribers without any authentication, we will get an error message now:

{ "detail": "Authentication credentials were not provided." }

So let’s provide authentication using the token we got.

$ curl -H "Content-Type: application/json" -H "Authorization: Token 5e2effff34c85c11a8720a597b96d73a4634c9ad" http://localhost:8000/api/subscribers/

Now it works fine! There are many useful permission classes already provided with Django REST Framework. You can find a list of them here: http://www.django-rest-framework.org/api-guide/permissions/#api-reference.

Custom Authentication and Permissions

The authentication and permission classes which come with DRF are quite enough for many cases. But what if we needed to create our own? Let’s see how we can do that.

Writing a custom authentication class is very simple. You define your custom authenticate method which would receive the request object. You will have to return an instance of the default User model if authentication succeeds, otherwise raise an exception. You can also return an optional value for the auth object to be set on request. If our authentication method can not be used for this request, we should return None so other classes are tried.

Here’s an example from DRF docs:

from django.contrib.auth.models import User
from rest_framework import authentication
from rest_framework import exceptions

class ExampleAuthentication(authentication.BaseAuthentication):
    def authenticate(self, request):
        username = request.META.get('X_USERNAME')
        if not username:
            return None

        try:
            user = User.objects.get(username=username)
        except User.DoesNotExist:
            raise exceptions.AuthenticationFailed('No such user')

        return (user, None)

In this example, the username is being retrieved from a custom header (X_USERNAME) and the rest is quite easy to understand.

Next, let’s see how we can create our custom permission class. For permissions, we can have two types of permissions – global permission or per object permission. Here’s an example of global permission from DRF docs:

from rest_framework import permissions

class BlacklistPermission(permissions.BasePermission):
    """
    Global permission check for blacklisted IPs.
    """

    def has_permission(self, request, view):
        ip_addr = request.META['REMOTE_ADDR']
        blacklisted = Blacklist.objects.filter(ip_addr=ip_addr).exists()
        return not blacklisted

If the has_permission method returns True then the user has permission, otherwise not. Let’s see the example for per object permission:

class IsOwnerOrReadOnly(permissions.BasePermission):
    """
    Object-level permission to only allow owners of an object to edit it.
    Assumes the model instance has an `owner` attribute.
    """

    def has_object_permission(self, request, view, obj):
        # Read permissions are allowed to any request,
        # so we'll always allow GET, HEAD or OPTIONS requests.
        if request.method in permissions.SAFE_METHODS:
            return True

        # Instance must have an attribute named `owner`.
        return obj.owner == request.user

For per object permissions, we can override the has_object_permission method. It receives the request, the view and the obj. We have to check whether the current user can access the obj in question. Just like before, we return True or False to allow or deny the request.
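Stripped of Django, the decision the IsOwnerOrReadOnly class makes boils down to this (a toy re-implementation for illustration only; the function and variable names are mine):

```python
# the same safe methods DRF's permissions.SAFE_METHODS covers
SAFE_METHODS = ('GET', 'HEAD', 'OPTIONS')

def is_owner_or_read_only(method, obj_owner, current_user):
    # Reads are always allowed; writes are allowed only for the owner.
    if method in SAFE_METHODS:
        return True
    return obj_owner == current_user

assert is_owner_or_read_only('GET', 'alice', 'bob')           # anyone may read
assert is_owner_or_read_only('DELETE', 'alice', 'alice')      # the owner may write
assert not is_owner_or_read_only('PUT', 'alice', 'bob')       # others may not
```

Seeing it this way makes clear that permission classes are just predicates over (request, object) pairs; DRF supplies the plumbing that calls them at the right time.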

In this blog post, we learned the basics of authentication and permissions. We now know how we can secure our API endpoints with DRF. While the token based authentication was very useful, we kind of like JWT. So in our next post, we will be using a third party package to implement JWT for Django REST Framework.

The post Django REST Framework: Authentication and Permissions appeared first on Polyglot.Ninja().

Categories: FLOSS Project Planets

Ritesh Raj Sarraf: apt-offline 1.8.0 released

Planet Debian - Sun, 2017-05-21 17:17

I am pleased to announce the release of apt-offline, version 1.8.0. This release is mainly a forward port of apt-offline to Python 3 and PyQt5. There are some glitches related to Python 3 and PyQt5, but overall the CLI interface works fine. Other than the porting, there's also an important bug fix related to a memory leak when using the MIME library. And there are some updates to the documentation (user examples) based on feedback from users.

The release is available from Github and Alioth.




What is apt-offline ?

Description: offline APT package manager

apt-offline is an Offline APT Package Manager.

apt-offline can fully update and upgrade an APT based distribution without connecting to the network, all of it transparent to APT.

apt-offline can be used to generate a signature on a machine (with no network). This signature contains all download information required for the APT database system. This signature file can be used on another machine connected to the internet (which need not be a Debian box and can even be running Windows) to download the updates. The downloaded data will contain all updates in a format understood by APT and this data can be used by apt-offline to update the non-networked machine.

apt-offline can also fetch bug reports and make them available offline.
Categories: FLOSS Project Planets

Bryan Pendleton: The Upper Rhine Valley: prelude and overture

Planet Apache - Sun, 2017-05-21 17:11

We took an altogether-too-short but thoroughly wonderful trip to the Upper Rhine Valley region of Europe. I'm not sure that "Upper Rhine Valley" is a recognized term for this region, so please forgive me if I've abused it; more technically, we visited:

  1. The Alsace region of France
  2. The Schwarzwald (Black Forest) region of Germany
  3. The neighboring areas of Frankfurt, Germany, and Basel, Switzerland.
But since we were at no point more than about 40 miles from the Rhine river, and since we were several hundred miles from the Rhine's mouth in the North Sea, it seems like a pretty good description to me.

Plus, it matches up quite nicely with this map.

So there you go.

Anyway, we spent 10 wonderful days there, which was hardly even close to enough, but it was what we had.

And I, in my inimitable fashion, packed about 30 days of sightseeing into those 10 days, completely exhausting my travel companions.

Once again, no surprise.

I'll have more to write about various aspects of the trip subsequently, but here let me try to crudely summarize the things that struck me about the trip.

  • Rivers are incredibly important in Europe, much more so than here in America. Rivers provide transportation, drinking water, sewage disposal, electric power, food (fish), and form the boundaries between regions and nations. They do some of these things in America, too, but we aren't nearly as attached to our rivers as they are in Central Europe, where some of the great rivers of the world arise.
  • For centuries, castles helped people keep an eye on their rivers, and make sure that their neighbors were behaving as they should in the river valleys.
  • Trains are how you go places in Europe. Yes, you can fly, or you can drive, but if you CAN take a train, you should. And, if you can take a first class ticket on TGV, you absolutely, absolutely should. I have never had a more civilized travel experience than taking the TGV from Frankfurt to Strasbourg. (Though full credit to Lufthansa for being a much-better-than-ordinary airline. If you get a chance to travel Lufthansa, do it.)
  • To a life-long inhabitant of the American West, Central Europe is odd for having almost no animals. People live in Central Europe, nowadays; animals do not. BUT: storks!
  • France, of course, is the country that perfected that most beautiful of beverages: wine. While most of the attention to wine in France goes to Southern France, don't under-rate Alsace, for they have absolutely delicious wines of many types, and have been making wine for (at least) 2,000 years. We Californians may think we know something about wine; we don't.
  • The visible history of the Upper Rhine Valley is deeply formed by the Franks. Don't try to understand the cathedrals, villages, cities, etc. without spending some time thinking about Charlemagne, etc. And, if you were like me and rather snored through this part of your schooling, prepare to have your eyes opened.
  • The other major history of the Upper Rhine Valley involves wars. My, but this part of the world has been fought over for a long time. Most recently, of course, we can distinguish these major events:
    1. The Franco-Prussian war, which unified Germany and resulted in Alsace being a German territory
    2. World War One
    3. World War Two
    Although the most recent of these events is now 75 years in the past, the centuries and centuries of conflict over who should rule these wonderful lands has left its mark, deeply.

    So often through my visit I thought to myself: "Am I in French Germany? Or perhaps is this German France?" Just trying to form and phrase these questions in my head, I realized how little I knew, and how much there is to learn, about how people form their bonds with their land, and their neighbors, and their thoughts. Language, food, customs, politics, literature: it's all complex and it's all one beautiful whole.

    This, after all, is the land where Johannes Gutenberg invented the printing press, where people like Johann Wolfgang von Goethe, Louis Pasteur, John Calvin, and Albert Schweitzer lived and did their greatest work.

I could, of course, have been much terser:

  1. The Upper Rhine Valley is one of the most beautiful places on the planet. The people who live there are very warm and welcoming, and it is a delightful place to take a vacation
  2. Early May is an absolutely superb time to go there.

I'll write more later, as I find time.

Categories: FLOSS Project Planets

Holger Levsen: 20170521-this-time-of-the-year

Planet Debian - Sun, 2017-05-21 14:26
It's this time of the year again…

So it seems summer has finally arrived here and for the first time this year I've been offline for more than 24h, even despite having wireless network coverage. The lake, the people, the bonfire, the music, the mosquitos and the fireworks at 3.30 in the morning were totally worth it!

Categories: FLOSS Project Planets

PyBites: Twitter digest 2017 week 20

Planet Python - Sun, 2017-05-21 13:59

Every weekend we share a curated list of 15 cool things (mostly Python) that we found / tweeted throughout the week.

Categories: FLOSS Project Planets

Russ Allbery: Review: Sector General

Planet Debian - Sun, 2017-05-21 13:21

Review: Sector General, by James White

Series: Sector General #5
Publisher: Orb
Copyright: 1983
Printing: 2002
ISBN: 0-312-87770-6
Format: Trade paperback
Pages: 187

Sector General is the fifth book (or, probably more accurately, collection) in the Sector General series. I blame the original publishers for the confusion. The publication information is for the Alien Emergencies omnibus, which includes the fourth through the sixth books in the series.

Looking back on my previous reviews of this series (wow, it's been eight years since I read the last one?), I see I was reviewing them as novels rather than as short story collections. In retrospect, that was a mistake, since they're composed of clearly stand-alone stories with a very loose arc. I'm not going to go back and re-read the earlier collections to give them proper per-story reviews, but may as well do this properly here.

Overall, this collection is more of the same, so if that's what you want, there won't be any negative surprises. It's another four engineer-with-a-wrench stories about biological and medical puzzles, with only a tiny bit of characterization and little hint of any personal life for the characters outside of the job. Some stories are forgettable, but White does create some memorable aliens. Sadly, the stories don't take us to the point of real communication, so those aliens stop at biological puzzles and guesswork. "Combined Operation" is probably the best, although "Accident" is the most philosophical and an interesting look at the founding principle of Sector General.

"Accident": MacEwan and Grawlya-Ki are human and alien brought together by a tragic war, and forever linked by a rather bizarre war monument. (It's a very neat SF concept, although the implications and undiscussed consequences don't bear thinking about too deeply.) The result of that war was a general recognition that such things should not be allowed to happen again, and it brought about a new, deep commitment to inter-species tolerance and politeness. Which is, in a rather fascinating philosophical twist, exactly what MacEwan and Grawlya-Ki are fighting against: not the lack of aggression, which they completely agree with, but with the layers of politeness that result in every species treating all others as if they were eggshells. Their conviction is that this cannot create a lasting peace.

This insight is one of the most profound bits I've read in the Sector General novels and supports quite a lot of philosophical debate. (Sadly, there isn't a lot of that in the story itself.) The backdrop against which it plays out is an accidental crash in a spaceport facility, creating a dangerous and potentially deadly environment for a variety of aliens. Given the collection in which this is included and the philosophical bent described above, you can probably guess where this goes, although I'll leave it unspoiled if you can't. It's an idea that could have been presented with more subtlety, but it's a really great piece of setting background that makes the whole series snap into focus. A much better story in context than its surface plot. (7)

"Survivor": The hospital ship Rhabwar rescues a sole survivor from the wreck of an alien ship caused by incomplete safeguards on hyperdrive generators. The alien is very badly injured and unconscious and needs the full attention of Sector General, but on the way back, the empath Prilicla also begins suffering from empathic hypersensitivity. Conway, the protagonist of most of this series, devotes most of his attention to that problem, having delivered the rescued alien to competent surgical hands. But it will surprise no regular reader that the problems turn out to be linked (making it a bit improbable that it takes the doctors so long to figure that out). A very typical entry in the series. (6)

"Investigation": Another very typical entry, although this time the crashed spaceship is on a planet. The scattered, unconscious bodies of the survivors, plus signs of starvation and recent amputation on all of them, convince the military (well, police is probably more accurate) escort that this may be a crime scene. The doctors are unconvinced, but cautious, and local sand storms and mobile vegetation add to the threat. I thought this alien design was a bit less interesting (and a lot creepier). (6)

"Combined Operation": The best (and longest) story of this collection. Another crashed alien spacecraft, but this time it's huge, large enough (and, as they quickly realize, of a design) to indicate a space station rather than a ship, except that it's in the middle of nowhere and each segment contains a giant alien worm creature. Here, piecing together the biology and the nature of the vehicle is only the beginning; the conclusion points to an even larger problem, one that requires drawing on rather significant resources to solve. (On a deadline, of course, to add some drama.) This story requires the doctors to go unusually deep into the biology and extrapolated culture of the alien they're attempting to rescue, which made it more intellectually satisfying for me. (7)

Followed by Star Healer.

Rating: 6 out of 10

Categories: FLOSS Project Planets

SMB on openSUSE Conference

Planet KDE - Sun, 2017-05-21 13:05

The annual openSUSE Conference 2017 is upcoming! Next weekend it will again take place at the Z-Bau in Nuremberg, Germany.

The conference program is impressive and if you can make it, you should consider stopping by.

Stefan Schäfer from the Invis server project and I will organize a workshop about openSUSE for Small and Medium Business (SMB).

SMB has long been a concern close to both our hearts: Stefan, who even does it for a living, and I have both used openSUSE in the SMB area for a long time, and we know how well it serves there. Stefan even initiated the Invis Server project, which is completely free software and builds on top of the openSUSE distributions. The Invis Server adds a whole bunch of extra functionality to openSUSE that is extremely useful in the special SMB use case. It came a long way, starting as Stefan's own project many years ago and evolving into a properly maintained openSUSE spin in OBS with a small but active community.

The interesting question is how openSUSE, Invis Server and other smaller projects, such as Kraft, can unite and offer a reliably maintained and comprehensive solution for this huge group of potential users, who are currently locked in to proprietary technologies even though FOSS can really make a difference here.

In the workshop we will first briefly introduce the existing projects, maybe discuss some technical questions like the integration of new packages into the openSUSE distributions, and also touch on organizational questions like how we want to set up and market openSUSE SMB.

Participants in the workshop should not expect too much presentation. We rather hope for a lively discussion with many people bringing in their projects that might fit, their experiences and ideas. Don't be shy!



Categories: FLOSS Project Planets

Mike Driscoll: PyCon 2017 Videos are Up

Planet Python - Sun, 2017-05-21 12:31

The PyCon 2017 videos are already available on YouTube. Here are some highlights:

The Vanderplas Keynote

Raymond Hettinger’s “Modern Python Dictionaries — A confluence of a dozen great ideas”

Brett Cannon’s “What’s New in Python 3.6?”

And there’s much more here: https://www.youtube.com/channel/UCrJhliKNQ8g0qoE_zvL8eVg/feed

Categories: FLOSS Project Planets

Adnan Hodzic: Automagically deploy & run containerized WordPress (PHP7 FPM, Nginx, MariaDB) using Ansible + Docker on AWS

Planet Debian - Sun, 2017-05-21 12:28

In this blog post, I've described how what started as a simple migration of a WordPress blog to AWS ended up as an automation project consisting of multiple published Ansible roles deploying and running multiple Docker images.

If you're not interested in reading about my entire journey, my cognition gains, and how this process came to be, please skip down to the "Birth of: containerized-wordpress-project (TL;DR)" section.

Migrating WordPress blog to AWS (EC2, Lightsail?)

I've been sold on Amazon's AWS idea of cloud computing "services" for a couple of years now, so I've wanted, and been trying, to migrate this (WordPress) blog to AWS, but somehow it never worked out.

Moving it to an EC2 instance, with its own EBS volumes, AMI, EIP, Security Group … it just seemed like overkill.

When AWS Lightsail was first released, it seemed like the answer to all my problems.

But it wasn't, even disregarding its somewhat restrictive, dumbed-down versions of the original features. Living in Amsterdam, my main problem with it was that it was only available in a single US region.

Regardless, I thought it had everything I needed for a WordPress site, and as a new service, it had great potential.

Its regional limitations were also good in the sense that they made me realize one important thing: once I migrated my blog to AWS, I wanted to be able to seamlessly move/migrate it across different EC2 instances and different regions once they became available.

If done properly, it meant I could even have it moved across different clouds (I’m talking to you Google Cloud).

P.S.: AWS Lightsail is now available in a couple of different regions across Europe, a rollout which was almost seamless.

Fundamental problem of every migration … is migration

Phase 1: Don’t reinvent the wheel?

When you have a WordPress site that's not self-hosted, you want everything to work, yet you really don't want to spend any time managing the infrastructure it runs on.

And as soon as I started looking for something that fit these criteria, I found pre-configured, out-of-the-box WordPress EC2 images on the AWS Marketplace. Great!

But when I took a look, although everything ran out of the box, I wasn't happy with the software stack it was built on: Ubuntu 14.04 and Apache, with all of the services started using custom scripts. Yuck.

With this setup, when it was time to upgrade (and it's already that time), you wouldn't be thinking about an upgrade. You'd only be thinking about another migration.

Phase 2: What if I built everything myself?

Installing and configuring everything manually, and then writing a huge HowTo to follow whenever I needed to re-create the whole stack, was not an option. The same goes for scripting the whole process, as the overhead of changes that had to be tracked was way too big.

Being a huge Ansible fan, automating this was the natural next step.

I even found an awesome Ansible role which seemed like it would do everything I needed. Except I realized I needed to update all the software it deployed, and customize it, since the configuration it deployed wasn't generic enough.

So I forked it and got to work. But soon enough, I was knee-deep in making and fiddling with various system changes. Exactly what I was trying to get away from in this case, and most importantly, what I was trying to avoid when it was time for the next update.

Phase 3: Marriage made in heaven: Ansible + Docker + AWS

The idea to have everything Dockerized was around from the very start. However, it never made a lot of sense until I put Ansible into the same picture. It was at this point that my final idea and requirements became crystal clear.

Use Ansible to configure and set up a host ready for the Docker ecosystem: an ecosystem consisting of a separate container for each required service (WordPress + Nginx + MariaDB), all linked together as a single service using Docker Compose.

The idea was backed by the intention to spend minimal to no time (and effort) on manual configuration of anything on the server. The level of attachment to this server was so low that I didn't even want to SSH into it.

If something went wrong, I could just nuke the whole thing and deploy the code on a freshly rolled-out, healthy server with everything working out of the box.

After it was clear what needed to be done, I got to work.

Birth of: containerized-wordpress-project (TL;DR)

After a lot of work, end result is project which allows you to automagically deploy & run containerized WordPress instance which consists of 3 separate containers running:

  • WordPress (PHP7 FPM)
  • Nginx
  • MariaDB

Once run, the containerized-wordpress playbook will guide you through an interactive setup of all 3 containers, after which it will run all Ansible roles created for this project. The end result is that a host you have never even SSH-ed into will be fully configured and running a containerized WordPress instance out of the box.

Most importantly, this whole process will be completed in <= 5 minutes and doesn’t require any Docker or Ansible knowledge!

containerized-wordpress demo

Console output of running “containerized-wordpress” Ansible Playbook:

Accessing WordPress instance created from “containerized-wordpress” Ansible Playbook:

Did I end up migrating to AWS in the end?

You bet. Thanks to the efforts made in containerized-wordpress-project, I'm happy to report that my whole WordPress migration to AWS was completed in a matter of minutes and that this blog is now running on Docker and on AWS!

I hope this same project will help you take a leap in your migration.

Happy hacking!

Categories: FLOSS Project Planets

Mike Driscoll: PyCon 2017 – Second Day

Planet Python - Sun, 2017-05-21 12:27

The second day of the PyCon 2017 conference was kicked off by breakfast with people from NASA and Pixar, among others, followed by several lightning talks. I didn’t see them all, but they were kind of fun. Then they moved on to the day’s first keynote by Lisa Guo & Hui Ding from Instagram. I hadn’t realized that they used Django and Python as their core technology.

They spoke on how they transitioned from Django 1.3 to 1.8 and from Python 2 to 3. It was a really interesting talk with a pretty deep dive into how they use Python at Instagram. It's really neat to see Python being able to scale to several hundred million users. If I remember correctly, they also mentioned that Python 3 saved them 30% in memory utilization compared with Python 2, along with a 12% boost in CPU utilization. They also mentioned that they did the conversion in the main branch, making it compatible with both Python 2 and 3 while continually releasing the product to their users. You can see the video on YouTube:
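Running one codebase on both Python 2 and 3 at once, as described above, typically relies on small compatibility shims. The sketch below is my own illustration of that general pattern, not Instagram's actual code:

```python
from __future__ import print_function

import sys

# Common dual-compatibility pattern: alias the names that differ between
# Python 2 and 3 once, then write the rest of the code against the aliases.
if sys.version_info[0] >= 3:
    text_type = str
    integer_types = (int,)
else:
    text_type = unicode  # noqa: F821 (only defined on Python 2)
    integer_types = (int, long)  # noqa: F821

def ensure_text(value, encoding="utf-8"):
    # Bytes on either version are decoded; text passes through unchanged.
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

print(ensure_text(b"caf\xc3\xa9"))  # café
```

Tools like six and python-future bundle exactly this kind of aliasing so you don't have to maintain it by hand.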

The next keynote was done by Katy Huff, a nuclear engineer. While I personally didn't find it as interesting as the Instagram one, it was fun to see how Python is being used in so many scientific communities and in so many disparate ways. If you're interested, you can watch the keynote here:

After that, I went to my first talk of the day which was Debugging in Python 3.6: Better, Faster, Stronger by Elizaveta Shashkova who works for the PyCharm team. Her talk focused on the new frame evaluation API that was introduced to CPython in PEP 523 and how it can make debugging easier and faster, albeit with a longer lead time to set up. Here’s the video:

Next up was Static Types for Python by Jukka Lehtosalo and David Fisher from the Dropbox team. They discussed how to use MyPy to introduce static typing using a live code demo as well as how they used it at Dropbox to add typing to 700,000 lines of code. I thought it was fascinating, even though I really enjoy Python’s dynamic nature. I can see this as a good way to enforce docstrings as well as make them more readable. Here’s the video:

After lunch, I went to an Open Space room about Python 201, which ended up being about what problems people face when they are trying to learn Python. It was really interesting and gave me many new insights into what people without a background in computer science are facing.

I attempted my own open space on wxPython, but somehow the room was commandeered by a group of people talking about drones and as far as I could tell, no one showed up to talk about wxPython. Disappointing, but whatever. I got to work on a fun wxPython project while I waited.

The last talk I attended was one given by Jean-Baptiste Aviat entitled Writing a C Python extension in 2017. He mentioned several different ways to interact with C/C++ with Python such as ctypes, cffi, Cython, and SWIG. His choice was ctypes. He was a bit hard to understand, so I highly recommend watching the video yourself to see what you think:

My other highlights were just random encounters in the hallways or at lunch where I got to meet other interesting people using Python.

Categories: FLOSS Project Planets

Dan Crosta: Introducing Tox-Docker

Planet Python - Sun, 2017-05-21 10:55

Today I released Tox-Docker to GitHub and the Python Package Index. Tox-Docker is a plugin for Tox, which, you guessed it, manages one or more Docker containers during test runs.

Why Tox-Docker?

Tox-Docker began its life because I needed to test some code that uses the PostgreSQL JSONB column type. In another life, I might have done this by instructing Jenkins to first install Postgres, start the server, create a user, create a database, and so on. There's even a small chance that this would work well, some of the time -- so long as tests didn't fail, the build didn't die unexpectedly without cleaning up, multiple tests didn't run at once, and so on. In fact, I've done exactly this sort of hackery a few times in the past already. It is dirty, and often requires manual cleanup after failures.

So, when confronted with the need to write tests that talk to a real Postgres instance, rather than reaching into my old toolbox I was determined to find a better solution. Docker can run multiple instances of Postgres at the same time in isolation from one another, which obviates the need for mutual exclusion of builds. Docker containers are lightweight to start, and easy to clean up (you can delete them all at once with a single command), so when the tests are done, we can simply remove them and move on.

There was still the question of how to manage the lifecycle of the container, though, which is where Tox comes in. Tox is a test automation tool that standardizes an interface between your machine (or your continuous integration environment) and your test code. Like Docker, Tox encourages isolation, by creating a clean virtualenv for each test run, free of old package installs, custom hacks, and so on. Tox already has a well-defined set of steps it runs to build your package, install dependencies, start tests, and gather results. Happily, it allows plugins to hook into this sequence to add custom behavior.

How Tox-Docker Works

Tox's plugins implement callback hooks to participate in the test workflow. For Tox-Docker, we use the pre-test and post-test hooks, which set up and tear down our Docker environment, respectively. Importantly, the post-test hook runs regardless of whether the tests passed, failed, or errored, ensuring that we'll have an opportunity to clean up any Docker containers we started during the pre-test hook. Finally, Tox plugins can also hook into the configuration system, so that projects using Tox-Docker can specify what Docker containers they require.

The simplest use of Tox-Docker is to specify the Docker image or images, including version tags, that are required during test runs. For instance, if your project requires Postgres, you might add this to your tox.ini:

[testenv]
docker = postgres:9.6

With Tox-Docker installed, the next time you run tox, you will see something like the following:

py27 docker: pull 'postgres:9.6'
py27 docker: run 'postgres:9.6'
py27 runtests: PYTHONHASHSEED='944551639'
py27 runtests: commands[0] | py.test test_your_project.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.12, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
rootdir: /home/travis/build/you/your-project, inifile:
collected 3 items

test_your_project.py ...

=========================== 3 passed in 0.02 seconds ===========================
py27 docker: remove '72e2ffea02' (forced)

Tox-Docker has picked up your configuration, and pulled and started a PostgreSQL container, which it shuts down after the tests finish. This is equivalent to running docker pull, docker run, and docker rm yourself, but without the manual hassle.

Challenges and Helpers

Not every Dockerized component can be started and be expected to "just work". Most services will require or allow for some amount of configuration, and your tests will need some information back out of Docker to know how to use the services. In particular, we need:

  1. A way to pass settings into Docker containers, in cases where the defaults are not sufficient
  2. A way to inform the tests how to communicate with the service inside the container, specifically, what ports are exposed
  3. A way to delay beginning the test run until the container has started and the application within it is ready to work

Tox-Docker lets you specify environment variables with the dockerenv setting in the tox.ini file:

[testenv]
docker = postgres:9.6
dockerenv =
    POSTGRES_USER=user_name
    POSTGRES_DB=database_name

Tox-Docker takes these variables and passes them to Docker as it launches the container, just as you might do manually with the --env flag. These variables are also made available in the environment that tests run in, so they can be used to construct a connection string, for instance.

Additionally, Tox-Docker interrogates the container just after it's started, to get the list of exposed TCP or UDP ports. For each port, Tox-Docker constructs an environment variable named after the container and exposed port, whose value is the host-side port number that Docker has mapped the exposed port to. Postgres listens on TCP port 5432 within the container, which might be mapped to port 32187 on your host system. In this case, an environment variable POSTGRES_5432_TCP will be set with value "32187".

Tests can use these environment variables to parameterize connections to the Dockerized services, rather than having to hard-code knowledge of the environment.
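For instance, a test suite might assemble a Postgres connection string from these variables. A minimal sketch (the variable names follow the POSTGRES_5432_TCP convention described above; the fallback defaults are my own illustrative assumption, not part of Tox-Docker):

```python
import os

# Sketch: build a Postgres DSN from the environment variables that
# Tox-Docker exports. The default values here are illustrative only.
def postgres_dsn():
    port = os.environ.get("POSTGRES_5432_TCP", "5432")
    user = os.environ.get("POSTGRES_USER", "postgres")
    db = os.environ.get("POSTGRES_DB", "postgres")
    return "postgresql://%s@localhost:%s/%s" % (user, port, db)

# Simulate the environment Tox-Docker would provide:
os.environ["POSTGRES_5432_TCP"] = "32187"
os.environ["POSTGRES_USER"] = "user_name"
os.environ["POSTGRES_DB"] = "database_name"
print(postgres_dsn())  # postgresql://user_name@localhost:32187/database_name
```

The test code never hard-codes a port; it always goes through the environment.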

Finally, in order to avoid false-negative test failures or errors, Tox-Docker waits until it can connect to each of the ports exposed by the Docker container. This is not a perfect way to determine that the service inside is actually ready, but Docker provides no way for a service inside the container to signal to the outside world that it's finished starting up. In practice, I hope that this heuristic is good enough.
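That readiness check can be approximated with a plain TCP connect loop. This is not Tox-Docker's actual implementation, just a sketch of the heuristic described above:

```python
import socket
import time

# Retry a TCP connect until the service inside the container accepts it,
# or give up after `timeout` seconds.
def wait_for_port(host, port, timeout=30.0):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    return False
```

As noted, a successful connect only proves the port is open, not that the application behind it has finished initializing, so this remains a heuristic.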

Future Work

The most obvious next step for Tox-Docker is to support Docker Compose, the Docker tool that lets you launch a cluster of interconnected containers in a single command. For the projects I am working with, I haven't yet had need of Docker Compose, but for projects of a certain level of complexity this will be preferable to attempting to manually manage this in tox.ini.

Installation and Feedback

Tox-Docker is available in the Python Package Index for installation via pip install tox-docker. Contributions, suggestions, questions, and feedback are welcome via GitHub.

Categories: FLOSS Project Planets

Bryan Pendleton: Back online

Planet Apache - Sun, 2017-05-21 09:01

I took a break from computers.

I had a planned vacation, and so I did something that's a bit rare for me: I took an 11 day break from computers.

I didn't use any desktops or laptops. I didn't have my smartphone with me.

I went 11 days without checking my email, or signing on to various sites where I'm a regular, or opening my Feedly RSS reader, or anything like that.

Now, I wasn't TOTALLY offline: there were newspapers and television broadcasts around, and I was traveling with other people who had computers.

But, overall, it was a wonderful experience to just "unplug" for a while.

I recommend it highly.

Categories: FLOSS Project Planets

Elena 'valhalla' Grandi: Modern XMPP Server

Planet Debian - Sun, 2017-05-21 07:30
Modern XMPP Server

I've published a new HOWTO on my website http://www.trueelena.org/computers/howto/modern_xmpp_server.html:

Enrico Zini already wrote about the Why (and the What, Who and When) at http://www.enricozini.org/blog/2017/debian/modern-and-secure-instant-messaging/, so I'll just quote his conclusion and move on to the How.

I now have an XMPP setup which has all the features of the recent fancy chat systems, and on top of that it runs, client and server, on Free Software, which can be audited, it is federated and I can self-host my own server in my own VPS if I want to, with packages supported in Debian.


I've decided to install https://prosody.im/, mostly because it was recommended by the RTC QuickStart Guide http://rtcquickstart.org/; I've heard that similar results can be reached with https://www.ejabberd.im/ and other servers.

I'm also targeting https://www.debian.org/ stable (+ backports); as I write this, that is jessie; if there are significant differences I will update this article when I upgrade my server to stretch. Right now, this means that I'm using prosody 0.9 (and that's probably also the version that will be available in stretch).

Installation and prerequisites

You will need to enable the https://backports.debian.org/ repository and then install the packages prosody and prosody-modules.

You also need to setup some TLS certificates (I used Let's Encrypt https://letsencrypt.org/); and make them readable by the prosody user; you can see Chapter 12 of the RTC QuickStart Guide http://rtcquickstart.org/guide/multi/xmpp-server-prosody.html for more details.

On your firewall, you'll need to open the following TCP ports:

  • 5222 (client2server)

  • 5269 (server2server)

  • 5280 (default http port for prosody)

  • 5281 (default https port for prosody)

The latter two are needed to enable some services provided via http(s), including rich media transfers.

With just a handful of users, I didn't bother to configure LDAP or anything else, but just created users manually via:

prosodyctl adduser alice@example.org

In-band registration is disabled by default (and I've left it that way, to prevent my server from being used to send spim https://en.wikipedia.org/wiki/Messaging_spam).

prosody configuration

You can then start configuring prosody by editing /etc/prosody/prosody.cfg.lua and changing a few values from the distribution defaults.

First of all, enforce the use of encryption and certificate checking both for client2server and server2server communications with:

c2s_require_encryption = true
s2s_secure_auth = true

and then, sadly, add to the whitelist any server that you want to talk to and doesn't support the above:

s2s_insecure_domains = { "gmail.com" }


For each virtualhost you want to configure, create a file /etc/prosody/conf.avail/chat.example.org.cfg.lua with contents like the following:

VirtualHost "chat.example.org"
enabled = true
ssl = {
key = "/etc/ssl/private/example.org-key.pem";
certificate = "/etc/ssl/public/example.org.pem";
}

For the domains where you also want to enable MUCs, add the following lines:

Component "conference.chat.example.org" "muc"
restrict_room_creation = "local"

The "local" value configures prosody so that only local users are allowed to create new rooms (but then everybody can join them, if the room administrator allows it): this may help reduce unwanted usage of your server by random people.

You can also add the following line to enable rich media transfers via http uploads (XEP-0363):

Component "upload.chat.example.org" "http_upload"

The defaults are pretty sane, but see https://modules.prosody.im/mod_http_upload.html for details on what knobs you can configure for this module

Don't forget to enable the virtualhost by linking the file inside /etc/prosody/conf.d/.

additional modules

Most of the other interesting XEPs are enabled by loading additional modules inside /etc/prosody/prosody.cfg.lua (under modules_enabled); to enable mod_something just add a line like:

"something";
Most of these come from the prosody-modules package (and thus from https://modules.prosody.im/ ) and some may require changes when prosody 0.10 becomes available; when this is the case it is mentioned below.

  • mod_carbons (XEP-0280)
    To keep conversations synchronized while using multiple devices at the same time.

    This will be included by default in prosody 0.10.

  • mod_privacy + mod_blocking (XEP-0191)
    To allow user-controlled blocking of users, including as an anti-spim measure.

    In prosody 0.10 these two modules will be replaced by mod_privacy.

  • mod_smacks (XEP-0198)
    Allow clients to resume a disconnected session before a customizable timeout and prevent message loss.

  • mod_mam (XEP-0313)
Archive messages on the server for a limited period of time (default 1 week) and allow clients to retrieve them; this is required to synchronize message history between multiple clients.

    With prosody 0.9 only an in-memory storage backend is available, which may make this module problematic on servers with many users. prosody 0.10 will fix this by adding support for an SQL-backed storage with archiving capabilities.

  • mod_throttle_presence + mod_filter_chatstates (XEP-0352)
    Filter out presence updates and chat states when the client announces (via Client State Indication) that the user isn't looking. This is useful to reduce power and bandwidth usage for "useless" traffic.

Categories: FLOSS Project Planets