FLOSS Project Planets

Building Qt apps with Travis CI and Docker

Planet KDE - 10 hours 17 min ago

I recently configured Travis CI to build Nanonote, my minimalist note-taking application. We use Jenkins a lot at work, and despite the fact that I dislike the tool itself, it has proven invaluable in helping us catch errors early. So I strongly believe in the value of Continuous Integration.

When it comes to CI setup, I believe it is important to keep your distance from the tool you are using: keep as much of the setup as possible in tool-agnostic scripts, versioned in your repository, and make the CI server call these scripts.

Ensuring your build scripts are independent of your CI server gives you a few advantages:

  • Your setup is easier to extend and debug, since you can run the build scripts on your machine. This is lighter than running an instance of your CI server on your local machine (nobody takes the time to do that anyway) and more efficient than committing changes to a temporary branch and then waiting for your CI server to build them to see if you got it right (everybody does that).

  • It keeps the build instructions next to your code, instead of storing them in, say, a Jenkins XML file. This ensures that you can add dependencies and adjust the build script in a single commit. It also ensures that if your build script evolves, you can still build old branches on the CI server (for example, when you need to fix a released version).

  • If your CI server is Jenkins or something similar, you spend less time cursing at the slow web-based UI (yes, I know about Jenkins Pipelines, but those have other problems...).

  • It is easier to switch to another CI server.

With this in mind, here is how I configured Nanonote CI.

Create a Build environment using Docker

The first element is to create a stable build environment. To do this I created a Docker image with the necessary build components. Here is its Dockerfile, stored in the ci directory of the repository:

FROM ubuntu:18.04

RUN apt-get update \
    && apt-get install -y -qq --no-install-recommends \
        cmake \
        dpkg-dev \
        file \
        g++ \
        make \
        ninja-build \
        python3 \
        python3-pip \
        python3-setuptools \
        qt5-default \
        qtbase5-dev \
        qttools5-dev \
        rpm \
        xvfb

COPY requirements.txt /tmp
RUN pip3 install -r /tmp/requirements.txt

ENTRYPOINT ["/bin/bash"]

Nothing really complicated here, but there are a few interesting things to point out nevertheless.

It installs the dpkg-dev and rpm packages, so that CPack can build .deb and .rpm packages.

It also installs the xvfb package, to be able to run tests which require an X server.

Finally, it copies a requirements.txt file and pip-installs it. This installs qpropgen's dependencies. This requirements.txt lives in 3rdparty/qpropgen, which Docker cannot reach (it only sees files inside the ci directory), so I created a simple ci/build-docker script to build the Docker image:

#!/bin/sh
set -ev
cd $(dirname $0)
cp ../3rdparty/qpropgen/requirements.txt .
docker build -t nanonote:1 .

This gives us a clean build environment; now let's create a build script.

The build script

This script is ci/build-app. Its job is to:

  1. Create a source tarball
  2. Build and run tests from this source tarball
  3. Build .rpm and .deb packages

You may wonder why the script creates a source tarball, since GitHub automatically generates them when one creates a tag. There are two reasons for this:

  1. GitHub tarballs do not contain repository submodules, making them useless for Nanonote.
  2. I prefer to rely on my own build script to generate the source tarball, as it makes the project less dependent on GitHub facilities, should I decide to move to another git hosting service.

Reason #1 also explains why the script builds from the source tarball instead of using the git repository source tree: it ensures the tarball is not missing any file necessary to build the app.

I am not going to include the script here, but you can read it on GitHub.

Travis setup

Now that we have a build script and a build environment, we can make Travis use them. Here is Nanonote's .travis.yml file. As you can see, it is just a few lines:

dist: xenial
language: minimal
services:
  - docker
install:
  - ci/build-docker
script:
  - docker run -v $PWD:/root/nanonote nanonote:1 /root/nanonote/ci/build-app

Not much to say here:

  • We tell Travis to use an Ubuntu Xenial (16.04) distribution and Docker.
  • The "install" step builds the Docker image.
  • The "script" step mounts the source tree inside the Docker image and runs the build script.

Travis runs this on all branches pushed to GitHub. I configured GitHub to refuse pushes to the master branch if the commits have not been validated by Travis. This rule applies to all project contributors, including me. Since there is not (for now?) a large community of contributors to the project, I don't open pull requests: I just push commits to the dev branch and, once Travis has checked them, merge dev into master.

Releases

When it's time to create a release, I just do what Travis does: rebuild the Docker image then run the build script inside it.
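Concretely, that amounts to running the same two commands Travis runs (taken verbatim from the setup above) from a checkout of the repository:

ci/build-docker
docker run -v $PWD:/root/nanonote nanonote:1 /root/nanonote/ci/build-app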

Since the source tree is mounted inside the Docker image, I get the source and binary packages in the dist directory of the repository, so I can test them and publish them.

Travis has a publication system to automatically attach build artefacts to GitHub releases when building from a tag, but I prefer to build them myself, because that gives me the opportunity to test the build artefacts before tagging, and it prevents me from becoming too dependent on the Travis service.

Conclusion

That's it for this description of Nanonote CI setup. It's still young so I might refine it, but it is already useful. I am probably going to create similar setups for my other C++ Qt projects.

I hope it helped you as well.

Categories: FLOSS Project Planets

Clint Adams: Using hkt findpaths in a more boring way

Planet Debian - 11 hours 32 min ago

Did dkg certify his new key with something I've certified?

hkt findpaths --keyring ~/.gnupg/pubring.gpg '' \
  2100A32C46F895AF3A08783AF6D3495BB0AE9A02 \
  C4BC2DDB38CCE96485EBE9C2F20691179038E5C6 2>/dev/null
(3,[46,31,257])
(31,0EE5BE979282D80B9F7540F1CCD2ED94D21739E9)
(46,2100A32C46F895AF3A08783AF6D3495BB0AE9A02)
(257,C4BC2DDB38CCE96485EBE9C2F20691179038E5C6)

I (№ 46) have certified № 31 (0EE5BE979282D80B9F7540F1CCD2ED94D21739E9) which has certified № 257 (C4BC2DDB38CCE96485EBE9C2F20691179038E5C6).

Posted on 2019-01-19 Tags: quanks
Categories: FLOSS Project Planets

Codementor: Regular Expressions in Python

Planet Python - 12 hours 6 min ago
Introduction: Regular Expressions are a sequence of characters used to find and replace patterns in a string. In simple terms, they are a tool for matching patterns in text. In Python we have the "re" module...
Categories: FLOSS Project Planets

gamingdirectional: Render game scene with Panda 3D

Planet Python - 15 hours 50 min ago

Today we will continue to explore Panda3D. After a day of searching online for a method to export a whole mesh created with Blender so it can be used in a Panda3D game, I have found two of them: 1) exporting the mesh in the DirectX format, and 2) using YABEE to export the mesh in the egg file format. I have tried both and the result is the same: the mesh has been rendered on...

Source

Categories: FLOSS Project Planets

Daniel Kahn Gillmor: New OpenPGP certificate for dkg, 2019

Planet Debian - 18 hours 56 min ago
Update

I've scrapped my first try at a new OpenPGP certificate for 2019 (the one i published yesterday). See the history discussion at the bottom of this post for details. This blog post has been updated to reflect my revised attempt.

2019 OpenPGP transition (try 2)

My old OpenPGP certificate will be 12 years old later this year. I'm transitioning to a new OpenPGP certificate.

You might know my old OpenPGP certificate as:

pub   rsa4096 2007-06-02 [SC] [expires: 2019-06-29]
      0EE5BE979282D80B9F7540F1CCD2ED94D21739E9
uid           Daniel Kahn Gillmor <dkg@fifthhorseman.net>
uid           Daniel Kahn Gillmor <dkg@debian.org>

My new OpenPGP certificate is:

pub   ed25519 2019-01-19 [C] [expires: 2021-01-18]
      C4BC2DDB38CCE96485EBE9C2F20691179038E5C6
uid           Daniel Kahn Gillmor <dkg@fifthhorseman.net>
uid           Daniel Kahn Gillmor <dkg@debian.org>

If you've certified my old certificate, I'd appreciate your certifying my new one. Please do confirm by contacting me via whatever channels you think are most appropriate (including in-person if you want to share food or drink with me!) before you re-certify, of course.

I've published the new certificate to the SKS keyserver network, as well as to my personal website -- you can fetch it like this:

wget -O- https://dkg.fifthhorseman.net/dkg-openpgp.key | gpg --import

A copy of this transition statement signed by both the old and new certificates is available on my website, and you can also find further explanation about technical details, choices, and rationale on my blog.

Technical details

I've made a few decisions differently about this certificate:

Ed25519 and Curve25519 for Public Key Material

I've moved from 4096-bit RSA public keys to the Bernstein elliptic curve 25519 for all my public key material (EdDSA for signing, certification, and authentication, and Curve25519 for encryption). While 4096-bit RSA is likely to be marginally stronger cryptographically than curve 25519, 25519 still appears to be significantly stronger than any cryptanalytic attack known to the public.

Additionally, elliptic curve keys and the signatures associated with them are tiny compared to 4096-bit RSA. I certified my new cert with my old one, and well over half of the new certificate is just certifications from the old key because they are so large.

This size advantage makes it easier for me to ship the public key material (and signatures from it) in places that would be more awkward otherwise. See the discussion about Autocrypt below.

Split out ACLU identity

Note that my old certificate included some additional identities, including job-specific e-mail addresses. I've split out my job-specific cryptographic credentials to a different OpenPGP certificate entirely. If you want to mail me at dkg@aclu.org, you can use the certificate with fingerprint 888E6BEAC41959269EAA177F138F5AB68615C560 (which is also published on my work bio page).

This is in part because the folks who communicate with me at my ACLU address are more likely to have old or poorly-maintained e-mail systems than other people i communicate with, and they might not be able to handle curve 25519. So the ACLU keys use 3072-bit RSA, which is universally supported by any plausible OpenPGP implementation.

This way i can experiment with being more forward-looking in my free software and engineering community work, and shake out any bugs that i might find there, before cutting over the e-mails that come in from more legal- and policy-focused colleagues.

Isolated Subkey Capabilities

In my new certificate, the primary key is designated certification-only. There are three subkeys, one each for authentication, encryption, and signing. The primary key also has a longer expiration time (2 years as of this writing), while the subkeys have 1 year expiration dates.

Isolating this functionality helps a little bit with security (i can take the certification key entirely offline while still being able to sign non-identity data), and it also offers a pathway toward having a more robust subkey rotation schedule. As i build out my tooling for subkey rotation, i'll probably make a few more blog posts about that.

Autocrypt-friendly

Finally, several of these changes are related to the Autocrypt project, a really great collaboration of a group of mail user agent developers, designers, UX experts, trainers, and users, who are providing guidance to make encrypted e-mail something that normal humans can use without having to think too much about it.

Autocrypt treats the OpenPGP certificate User IDs as merely decorative, but its recommended form of the User ID for an OpenPGP certificate is just the e-mail address wrapped in angle brackets. Unfortunately, i didn't manage to get that particular form of User ID into this certificate at this time (see discussion of split User IDs below).

Autocrypt is also moving toward 25519 elliptic curve keys, so this gives me a chance to exercise that choice.

I'm proud to be associated with the Autocrypt project, and have been helping to shepherd some of the Autocrypt functionality into different clients (my work on my own MUA of choice, notmuch, is currently stalled, but i hope to pick it back up again soon). Having an OpenPGP certificate that works well with Autocrypt, and that i can stuff into messages even from clients that aren't fully-Autocrypt compliant yet, is useful to me for getting things tested.

Documenting workflow vs. tooling

Some people may want to know "how did you make your OpenPGP cert like this?" For those folks, i'm sorry but this is not a step-by-step technical howto. I've read far too many "One True Way To Set Up Your OpenPGP Certificate" blog posts that haven't aged well, and i'm not confident enough to tell people to run the weird arbitrary commands that i ran to get things working this way.

Furthermore, i don't want people to have to run those commands.

If i think there are sensible ways to set up OpenPGP certificates, i want those patterns built into standard tooling for normal people to use, without a lot of command-line hackery.

So if i'm going to publish a "how to", it would be in the form of software that i think can be sensibly maintained and provides a sane user interface for normal humans. I haven't written that tooling yet, but i need to change certs first, so for now you just get this blog post in English. But feel free to tell me what you think i could do better!

History

This is my second attempt at an OpenPGP certificate transition in 2019. My earlier attempt uncovered a bunch of tooling issues with split-out User IDs. The original rationale for trying the split, and the problems i found are detailed below.

What were Separated User IDs?

My earlier attempt at a new OpenPGP certificate for 2019 tried to do an unusual thing with the certificate User IDs. Rather than two User IDs:

  • Daniel Kahn Gillmor <dkg@fifthhorseman.net>
  • Daniel Kahn Gillmor <dkg@debian.org>

the (now revoked) earlier certificate had the name separate from the e-mail addresses, making three User IDs:

  • Daniel Kahn Gillmor
  • dkg@fifthhorseman.net
  • dkg@debian.org

There are a couple reasons i tried this.

One reason is to simplify the certification process. Traditional OpenPGP User ID certification is an all-or-nothing process: the certifier is asserting that both the name and e-mail address belong to the identified party. But this can be tough to reason about. Maybe you know my name, but not my e-mail address. Or maybe you know me over e-mail, but aren't really sure what my "real" name is (i'll leave questions about what counts as a real name to a more philosophical blog post). You ought to be able to certify them independently. Now you can, since it's possible to certify one User ID independently of another.

Another reason is because i planned to use this certificate for e-mail, among other purposes. In e-mail systems, the human name is a confusing distraction, as the real underlying correspondent is the e-mail address. E-mail programs should definitely allow their users to couple a memorable name with an e-mail address, but it should be more like a petname. The bundling of a human "real" name with the e-mail address by the User ID itself just provides more points of confusion for the mail client.

If the user communicates with a certain person by e-mail address, the certificate should be bound to the e-mail protocol address on its own. Then the user themselves can decide what other monikers they want to use for the person; the User ID shouldn't force them to look at a "real" name just because it's been bundled together.

Alas, putting this attempt into public practice uncovered several gaps in the OpenPGP ecosystem.

User IDs without an e-mail address are often ignored, mishandled, or induce crashes.

And User IDs that are a raw e-mail address (without enclosing angle-brackets) tickle additional problems.

Finally, Monkeysphere's ssh user authentication mechanism typically works on a single User ID at a time. There's no way in Monkeysphere to say "authorize access to account foo by any OpenPGP certificate that has a valid User ID Alice Jones and a valid User ID <alice@example.org>". I'd like to keep the ~/.monkeysphere/authorized_user_ids that i already have in place working OK. I have enough technical debt to deal with for Monkeysphere (including that it only handles RSA currently) that i don't need the additional headache of reasoning about split/joint User IDs too.

Because of all of these issues, in particular the schleuder bugs, i'm not ready to use a split User ID OpenPGP certificate on today's Internet, alas. I have revoked the OpenPGP certificate that had split User IDs and started over with a new certificate with a more standard User ID layout, as described above. Better to rip off the band-aid quickly!

Categories: FLOSS Project Planets

codingdirectional: Tidy up the user interface of the video editing application

Planet Python - 20 hours 43 min ago

Hello and welcome back. It has been a day since the last post, and today we will continue to edit our video application project. Now that I have included the final feature for this project, I can concentrate on the user interface part. My ideology when it comes to programming is always to focus on the main objective first before working on the small details: once we have destroyed the main battleship, it will be easy to take on the small battleships that have lost their main supply line.

In this article, we will create the user interface below, which consists of a button to select the video file, a checkbox to remove the audio, and another checkbox for adding new audio.

The new user interface

Below is the entire program.

from tkinter import *
from tkinter import filedialog
import os
import subprocess
import tkinter.ttk as tk

win = Tk() # Create instance
win.title("NeWw Vid") # Add a title
win.resizable(0, 0) # Disable resizing the GUI
win.configure(background='white') # change background color

mainframe = Frame(win) # create a frame
mainframe.pack()
buttonFrame = Frame(win) # create a button frame
buttonFrame.pack(side = BOTTOM, fill=X)

# Create a label
#aLabel = Label(win, text="Select video size and video", anchor="center", padx=13, pady=10, relief=RAISED)
#aLabel.grid(column=0, row=0, sticky=W+E)
#aLabel.configure(foreground="black")
#aLabel.configure(background="white")
#aLabel.configure(wraplength=110)

# Create a combo box
vid_size = StringVar() # create a string variable
preferSize = tk.Combobox(mainframe, textvariable=vid_size)
preferSize['values'] = (1920, 1280, 854, 640) # video width in pixels
#preferSize.grid(column=0, row=1) # the position of the combo box (pack is used instead)
preferSize.current(0) # select item one
preferSize.pack(side = LEFT, expand = TRUE)

removeAudioVal = IntVar()
removeAudio = tk.Checkbutton(mainframe, text="Remove Audio", variable=removeAudioVal)
removeAudio.pack(side = LEFT, padx=3)
newAudio = IntVar()
aNewAudio = tk.Checkbutton(mainframe, text="New Audio", variable=newAudio)
aNewAudio.pack(side = LEFT, padx=2)

# Open a video file
def openVideo():
    fullfilename = filedialog.askopenfilename(initialdir="/", title="Select a file", filetypes=[("Video file", "*.mp4; *.avi ")]) # select a video file from the hard drive
    audiofilename = '' # no new audio selected yet
    if(newAudio.get() == 1):
        audiofilename = filedialog.askopenfilename(initialdir="/", title="Select a file", filetypes=[("Audio file", "*.wav; *.ogg ")]) # select a new audio file from the hard drive
    if(fullfilename != ''):
        scale_vid = preferSize.get() # retrieve value from the combo box
        new_size = str(scale_vid)
        dir_path = os.path.dirname(os.path.realpath(fullfilename))
        os.chdir(dir_path)
        f = new_size + '.mp4' # the new output file name/format
        f2 = f + '.mp4' # second output file, used when removing/replacing audio
        noAudio = removeAudioVal.get() # get the checkbox state for audio
        #subprocess.call(['ffmpeg', '-stream_loop', '2', '-i', fullfilename, '-vf', 'scale=' + new_size + ':-1', '-y', f]) # resize and loop the video with ffmpeg
        #subprocess.call(['ffmpeg', '-i', fullfilename, '-vf', 'scale=' + new_size + ':-1', '-y', '-r', '24', f]) # resize and speed up the video with ffmpeg
        #subprocess.call(['ffmpeg', '-i', f, '-ss', '00:02:30', '-y', f2]) # create animated gif starting from 2 minutes and 30 seconds to the end
        subprocess.call(['ffmpeg', '-i', fullfilename, '-vf', 'scale=' + new_size + ':-1', '-y', f]) # resize the video with ffmpeg
        if(noAudio == 1):
            subprocess.call(['ffmpeg', '-i', f, '-c', 'copy', '-y', '-an', f2]) # remove audio from the original video
        if(audiofilename != '' and noAudio == 1):
            subprocess.call(['ffmpeg', '-i', f2, '-i', audiofilename, '-shortest', '-c:v', 'copy', '-c:a', 'aac', '-b:a', '256k', '-y', f]) # add audio to the video; trim either the audio or the video, whichever is longer
        #subprocess.call(['ffmpeg', '-i', f, '-vf', 'eq=contrast=1.3:brightness=-0.03:saturation=0.01', '-y', f2]) # adjust the saturation, contrast and brightness of the video
        #subprocess.call(['ffmpeg', '-i', f, '-y', f2]) # convert the video with ffmpeg

action_vid = tk.Button(buttonFrame, text="Open Video", command=openVideo)
action_vid.pack(fill=X)

win.mainloop()

Not bad for now; we will continue to modify the user interface in the next chapter. Below is the new video which this program has created.

http://islandstropicalman.tumblr.com/post/182129073622/music-with-a-lot-of-ants
Categories: FLOSS Project Planets

Thomas Guest: Python Counters @PyDiff

Planet Python - Fri, 2019-01-18 19:00

On Monday I gave a talk at PyDiff on the subject of Python Counters. A Counter is a specialised dict which has much in common with a set. Of course all dicts are like sets, but with Counters the resemblance is even stronger. The documentation states:

The Counter class is similar to bags or multisets in other languages.

Alongside set operations of union, intersection and is-a-subset, Counter also supports addition and subtraction — natural and unsurprising operations for a container whose job is to keep count of its elements. If you want to unite the contents of two Counters, it’s probably + you want rather than |.
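A quick illustration of those operations (my own sketch, not from the talk):

from collections import Counter

a = Counter('abbb')   # Counter({'b': 3, 'a': 1})
b = Counter('bcc')    # Counter({'c': 2, 'b': 1})

print(a + b)   # addition combines counts: Counter({'b': 4, 'c': 2, 'a': 1})
print(a - b)   # subtraction drops non-positive counts: Counter({'b': 2, 'a': 1})
print(a | b)   # union keeps the maximum of each count: Counter({'b': 3, 'c': 2, 'a': 1})
print(a & b)   # intersection keeps the minimum: Counter({'b': 1})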

Counters came to mind as a lightning talk subject since I had a go at the Advent of Code last year and used no fewer than 12 Counters in my solutions to 25 puzzles — and that total could well increase since I haven’t finished yet.

The talk itself is on github.

Categories: FLOSS Project Planets

Ben's SEO Blog: 6 Tips to Rock Drupal 8 SEO

Planet Drupal - Fri, 2019-01-18 15:55

This article was originally written on 2017-04-12 but has been updated with current information and SEO best practices.

Drupal is phenomenal for SEO. When you use Drupal 8 for your content management system, you have a powerful tool to rock search engine optimization. I’ve worked in Drupal for 12 years and I’ve experienced firsthand how quickly search engines respond to a well-optimized Drupal website. I’ve seen customers triple their traffic in weeks after upgrading from another platform. I’ve seen competitive advantages from site-wide optimizations like RDF or AMP that put my clients on the cutting edge of SEO because they use Drupal. The benefits are a faster website, higher rankings and more traffic.

One of the main reasons Drupal is the content management system of... Read the full article: 6 Tips to Rock Drupal 8 SEO

Categories: FLOSS Project Planets

Rhonda D'Vine: Enigma

Planet Debian - Fri, 2019-01-18 10:57

Just the other day a working colleague asked me what kind of music I listen to, especially when working. It's true, music helps me to focus better and work in a more concentrated way. But it obviously depends on what kind of music it is. And there is one project I come back to every now and then. The name is Enigma. It's not disturbing, good for background, with soothing and non-intrusive vocals. Here are the songs:

  • Return To Innocence: This is quite likely the song you know from them, which also got me hooked up originally.
  • Push The Limits: A powerful song. The album version is even a few minutes longer.
  • Voyageur: Love the rhythm and theme in this song.

Like always, enjoy.


Categories: FLOSS Project Planets

Acquia Developer Center Blog: Building Usable Conversations: How to Approach Conversational Interfaces

Planet Drupal - Fri, 2019-01-18 09:49

To kick off 2019 properly, the Experience Express is taking a break from Drupal and web development to consider an oft-forgotten component of new digital experiences in the conversational space. Though many organizations, some of Acquia's customers included, have leapt headlong into building conversational interfaces, sometimes it can be difficult in such a newfangled paradigm to consider all possible angles where things can go awry.

Tags: acquia drupal planet
Categories: FLOSS Project Planets

Stack Abuse: Lambda Functions in Python

Planet Python - Fri, 2019-01-18 09:10
What are Lambda Functions?

In Python, we use the lambda keyword to declare an anonymous function, which is why we refer to them as "lambda functions". An anonymous function refers to a function declared with no name. Although syntactically they look different, lambda functions behave in the same way as regular functions that are declared using the def keyword. The following are the characteristics of Python lambda functions:

  • A lambda function can take any number of arguments, but they contain only a single expression. An expression is a piece of code executed by the lambda function, which may or may not return any value.
  • Lambda functions can be used to return function objects.
  • Syntactically, lambda functions are restricted to only a single expression.

In this article, we will discuss Python's lambda functions in detail, as well as show examples of how to use them.

Creating a Lambda Function

We use the following syntax to declare a lambda function:

lambda argument(s): expression

As stated above, we can have any number of arguments but only a single expression. The lambda operator cannot have any statements and it returns a function object that we can assign to any variable.

For example:

remainder = lambda num: num % 2
print(remainder(5))

Output

1

In this code, lambda num: num % 2 is the lambda function. The num is the argument, while num % 2 is the expression that is evaluated, and the result of the expression is returned. The expression computes the remainder of dividing the input parameter by 2. Passing 5 as the parameter, we get a remainder of 1.

You should notice that the lambda function in the above script has not been assigned any name. It simply returns a function object which is assigned to the identifier remainder. However, despite being anonymous, it was possible for us to call it in the same way that we call a normal function. The statement:

lambda num: num % 2

Is similar to the following:

def remainder(num):
    return num % 2

Here is another example of a lambda function:

product = lambda x, y : x * y
print(product(2, 3))

Output

6

The lambda function defined above returns the product of the values of the two arguments.

Why Use Lambda Functions?

Lambda functions are used when you need a function for a short period of time. This is commonly used when you want to pass a function as an argument to higher-order functions, that is, functions that take other functions as their arguments.
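A common example of this (mine, not from the original article) is passing a lambda as the key function to the built-in sorted():

# sort a list of (name, age) tuples by the age field
people = [('Alice', 32), ('Bob', 25), ('Carol', 28)]
print(sorted(people, key=lambda person: person[1]))
# [('Bob', 25), ('Carol', 28), ('Alice', 32)]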

The use of an anonymous function inside another function is explained in the following example:

def testfunc(num):
    return lambda x : x * num

In the above example, we have a function that takes one argument, and the argument is to be multiplied by a number that is unknown. Let us demonstrate how to use the above function:

def testfunc(num):
    return lambda x : x * num

result1 = testfunc(10)
print(result1(9))

Output

90

In the above script, we use a lambda function to multiply the number we pass by 10. The same function can be used to multiply the number by 1000:

def testfunc(num):
    return lambda x : x * num

result2 = testfunc(1000)
print(result2(9))

Output

9000

It is possible for us to use the testfunc() function to define the above two lambda functions within a single program:

def testfunc(num):
    return lambda x : x * num

result1 = testfunc(10)
result2 = testfunc(1000)
print(result1(9))
print(result2(9))

Output

90
9000

Lambda functions can be used together with Python's built-in functions like map(), filter() etc.

In the following section, we will be discussing how to use lambda functions with various Python built-in functions.

The filter() Function

Python's filter() function takes a lambda function together with a list as its arguments. It has the following syntax:

filter(object, iterable)

The object here should be a lambda function which returns a boolean value. The object will be called for every item in the iterable to do the evaluation. The result is either a True or a False for every item. Note that the function can only take one iterable as the input.

A lambda function, along with the list to be evaluated, is passed to the filter() function. The filter() function returns a list of those elements that return True when evaluated by the lambda function. Consider the example given below:

numbers_list = [2, 6, 8, 10, 11, 4, 12, 7, 13, 17, 0, 3, 21]
filtered_list = list(filter(lambda num: (num > 7), numbers_list))
print(filtered_list)

Output

[8, 10, 11, 12, 13, 17, 21]

In the above example, we have created a list named numbers_list with a list of integers. We have created a lambda function to check for the integers that are greater than 7. This lambda function has been passed to the filter() function as the argument and the results from this filtering have been saved into a new list named filtered_list.

The map() Function

The map() function is another built-in function that takes a function object and a list. The syntax of map function is as follows:

map(object, iterable_1, iterable_2, ...)

The iterable to the map() function can be a dictionary, a list, etc. The map() function basically maps every item in the input iterable to the corresponding item in the output iterable, according to the logic defined by the lambda function. Consider the following example:

numbers_list = [2, 6, 8, 10, 11, 4, 12, 7, 13, 17, 0, 3, 21]
mapped_list = list(map(lambda num: num % 2, numbers_list))
print(mapped_list)

Output

[0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1]

In the script above, we have a list numbers_list, which consists of random numbers. We then call the map() function and pass it a lambda function as the argument. The lambda function calculates the remainder after dividing each number by 2. The result of the mapping is stored in a list named mapped_list. Finally, we print out the contents of the list.

Conclusion

In Python, a lambda function is a single-line function declared with no name, which can have any number of arguments, but it can only have one expression. Such a function is capable of behaving similarly to a regular function declared using Python's def keyword. Often times a lambda function is passed as an argument to another function.

In this article we explained the syntax, use-cases, and examples of commonly used lambda functions.

Categories: FLOSS Project Planets

WeKnow: Drupal South 2019 / Round Two

Planet Drupal - Fri, 2019-01-18 07:16
Drupal South 2019 / Round Two: The Australian Drupal community and market are growing stronger. I participated in the Drupal South event for a second time, where I learned more about GovCMS and presented on using Gatsby to create a React application to accelerate integrations.
Categories: FLOSS Project Planets

Keith Packard: newt-duino

Planet Debian - Fri, 2019-01-18 00:19
Newt-Duino: Newt on an Arduino

Here's our target system. The venerable Arduino Duemilanove. Designed in 2009, this board comes with the Atmel ATmega328 system on chip, and not a lot else. This 8-bit microcontroller sports 32kB of flash, 2kB of RAM and another 1kB of EEPROM. Squeezing even a tiny version of Python onto this device took some doing.

How Small is Newt Anyways?

From my other postings about Newt, the amount of memory and set of dependencies for running Newt has shrunk over time. Right now, a complete Newt system fits in about 30kB of text on both Cortex M and Atmel processors. That includes the parser and bytecode interpreter, plus garbage collector for memory management.

Bare Metal Arduino

The first choice to make is whether to take some existing OS and get that running, or to just wing it and run right on the metal. To save space, I decided (at least for now), to skip the OS and implement whatever hardware support I need myself.

Newt has limited dependencies; it doesn't use malloc, and the only OS interface the base language uses is getchar and putchar to the console. That means that a complete Newt system need not be much larger than the core Newt language and some simple I/O routines.

For the basic Arduino port, I included some simple serial I/O routines for the console to bring the language up. Once running, I've started adding some simple I/O functions to talk to the pins of the device.

Pushing data out of .data

The ATmega 328P, like most (all?) of the 8-bit Atmel processors, cannot directly access flash memory as data. Instead, you are required to use special library functions. Whoever came up with this scheme clearly loved the original 8051 design, because it worked the same way.

Modern 8051 clones (like the CC1111 we used to use for Altus Metrum stuff) fix this bug by creating a flat 16-bit address space for both flash and ram so that 16-bit pointers can see all of memory. For older systems, SDCC actually creates magic pointers and has run-time code that performs the relevant magic to fetch data from anywhere in the system. Sadly, no-one has bothered to do this with avr-gcc.

This means that any data accesses done in the normal way can only touch RAM. To make this work, avr-gcc places all data, read-write and read-only in RAM. Newt has some pretty big read-only data bits, including parse tables and error messages. With only 2kB of RAM but 32kB of flash, it's pretty important to avoid filling that with read-only data.

avr-gcc has a whole 'progmem' mechanism which allows you to direct data into living only in flash by decorating the declaration with PROGMEM:

const char PROGMEM error_message[] = "This is in flash, not RAM";

This is pretty easy to manage, the only problem is that attempts to access this data from your program will fail unless you use the pgm_read_word and pgm_read_byte functions:

const char *m = error_message;
char c;

while ((c = (char) pgm_read_byte(m++)))
    putchar(c);

avr-libc includes some functions, often indicated with a '_P' suffix, which take pointers to flash instead of pointers to RAM. So, it's possible to place all read-only data in flash and not in RAM, it's just a pain, especially when using portable code.

So, I hacked up the newt code to add macros for the parse tables, one to add the necessary decoration and three others to fetch elements of the parse table by address.

#define PARSE_TABLE_DECLARATION(t)      PROGMEM t
#define PARSE_TABLE_FETCH_KEY(a)        ((parse_key_t) pgm_read_word(a))
#define PARSE_TABLE_FETCH_TOKEN(a)      ((token_t) pgm_read_byte(a))
#define PARSE_TABLE_FETCH_PRODUCTION(a) ((uint8_t) pgm_read_byte(a))

With suitable hacks in Newt to use these macros, I could finally build newt for the Arduino.

Automatically Getting Strings out of RAM

A lot of the strings in Newt are printf format strings passed directly to a printf-ish functions. I created some wrapper macros to automatically move the format strings out of RAM and call functions expecting the strings to be in flash. Here's the wrapper I wrote for fprintf:

#define fprintf(file, fmt, args...) do {                \
        static const char PROGMEM __fmt__[] = (fmt);    \
        fprintf_P(file, __fmt__, ## args);              \
    } while(0)

This assumes that all calls to fprintf will take constant strings, which is true in Newt, but not true generally. I would love to automatically handle those cases using __builtin_constant_p, but gcc isn't as helpful as it could be; you can't declare a string to be initialized from a variable value, even if that code will never be executed:

#define fprintf(file, fmt, args...) do {                        \
        if (__builtin_constant_p(fmt)) {                        \
            static const char PROGMEM __fmt__[] = (fmt);        \
            fprintf_P(file, __fmt__, ## args);                  \
        } else {                                                \
            fprintf(file, fmt, ## args);                        \
        }                                                       \
    } while(0)

This doesn't compile when 'fmt' isn't a constant string because the initialization of fmt, even though never executed, isn't legal. Suggestions on how to make this work would be most welcome. I only need this for sprintf, so for now, I've created a 'sprintf_const' macro which does the above trick that I use for all sprintf calls with a constant string format.

With this hack added, I saved hundreds of bytes of RAM, making enough space for a (wait for it) 900 byte heap within the interpreter. That's probably enough to do some simple robotics stuff, with luck sufficient for my robotics students.

It Works!

> def fact(x):
>     r = 1
>     for y in range(2,x):
>         r *= y
>     return r
>
> fact(10)
362880
> fact(20) * fact(10)
4.41426e+22
>

This example was actually run on the Arduino pictured above.

Future Work

All I've got running at this point is the basic language and a couple of test primitives to control the LED on D13. Here are some things I'd like to add.

Using EEPROM for Program Storage

To make the system actually useful as a stand-alone robot, we need some place to store the application. Fortunately, the ATmega328P has 1kB of EEPROM memory. I plan on using this for program storage and will parse the contents at start up time.

Simple I/O API

Having taught students using Lego Logo, I've learned that the fewer keystrokes needed to get the first light blinking the better the students will do. Seeking to replicate that experience, I'm thinking that the I/O API should avoid requiring any pin mode settings, and should have functions that directly manipulate any devices on the board. The Lego Logo API also avoids functions with more than one argument, preferring to just save state within the system. So, for now, I plan to replicate that API loosely:

def onfor(n):
    talkto(LED)
    on()
    time.sleep(n)
    off()

In the Arduino environment, a motor controller is managed with two separate bits, requiring two separate numbers and lots of initialization:

int motor_speed = 6;
int motor_dir = 5;

void setup() {
    pinMode(motor_dir, OUTPUT);
}

void motor(int speed, int dir) {
    digitalWrite(motor_dir, dir);
    analogWrite(motor_speed, speed);
}

By creating an API which knows how to talk to the motor controller as a unified device, we get:

def motor(speed, dir):
    talkto(MOTOR_1)
    setdir(dir)
    setpower(speed)

Plans

I'll see about getting the EEPROM bits working, then the I/O API running. At that point, you should be able to do anything with this that you can with C, aside from performance.

Then some kind of host-side interface, probably written in Java to be portable to whatever machine the user has, and I think the system should be ready to experiment with.

Links

The source code is available from my server at https://keithp.com/cgit/newt.git/, and also at github https://github.com/keith-packard/newt. It is licensed under the GPLv2 (or later version).

Categories: FLOSS Project Planets

Daniel Kahn Gillmor: New OpenPGP certificate for dkg, 2019

Planet Debian - Thu, 2019-01-17 23:43

My old OpenPGP certificate will be 12 years old later this year. I'm transitioning to a new OpenPGP certificate.

You might know my old OpenPGP certificate as:

pub   rsa4096 2007-06-02 [SC] [expires: 2019-06-29]
      0EE5BE979282D80B9F7540F1CCD2ED94D21739E9
uid           Daniel Kahn Gillmor <dkg@fifthhorseman.net>
uid           Daniel Kahn Gillmor <dkg@debian.org>

My new OpenPGP certificate is:

pub   ed25519 2019-01-17 [C] [expires: 2021-01-16]
      723E343AC00331F03473E6837BE5A11FA37E8721
uid           Daniel Kahn Gillmor
uid           dkg@debian.org
uid           dkg@fifthhorseman.net

If you've certified my old certificate, I'd appreciate your certifying my new one. Please do confirm by contacting me via whatever channels you think are most appropriate (including in-person if you want to share food or drink with me!) before you re-certify, of course.

I've published the new certificate to the SKS keyserver network, as well as to my personal website -- you can fetch it like this:

wget -O- https://dkg.fifthhorseman.net/dkg-openpgp.key | gpg --import

A copy of this transition statement signed by both the old and new certificates is available on my website, and you can also find further explanation about technical details, choices, and rationale on my blog.

Technical details

I've made a few decisions differently about this certificate:

Ed25519 and Curve25519 for Public Key Material

I've moved from 4096-bit RSA public keys to the Bernstein elliptic curve 25519 for all my public key material (EdDSA for signing, certification, and authentication, and Curve25519 for encryption). While 4096-bit RSA is likely to be marginally stronger cryptographically than curve 25519, 25519 still appears to be significantly stronger than any cryptanalytic attack known to the public.

Additionally, elliptic curve keys and the signatures associated with them are tiny compared to 4096-bit RSA. I certified my new cert with my old one, and well over half of the new certificate is just certifications from the old key because they are so large.

This size advantage makes it easier for me to ship the public key material (and signatures from it) in places that would be more awkward otherwise. See the discussion below.

Separated User IDs

The other thing you're likely to notice if you're considering certifying my key is that my User IDs are now split out. Rather than two User IDs:

  • Daniel Kahn Gillmor <dkg@fifthhorseman.net>
  • Daniel Kahn Gillmor <dkg@debian.org>

I now have the name separate from the e-mail addresses, making three User IDs:

  • Daniel Kahn Gillmor
  • dkg@fifthhorseman.net
  • dkg@debian.org

There are a couple reasons i've done this.

One reason is to simplify the certification process. Traditional OpenPGP User ID certification is an all-or-nothing process: the certifier is asserting that both the name and e-mail address belong to the identified party. But this can be tough to reason about. Maybe you know my name, but not my e-mail address. Or maybe you know me over e-mail, but aren't really sure what my "real" name is (i'll leave questions about what counts as a real name to a more philosophical blog post). You ought to be able to certify them independently. Now you can, since it's possible to certify one User ID independently of another.

Another reason is because i plan to use this certificate for e-mail, among other purposes. In e-mail systems, the human name is a confusing distraction, as the real underlying correspondent is the e-mail address. E-mail programs should definitely allow their users to couple a memorable name with an e-mail address, but it should be more like a petname. The bundling of a human "real" name with the e-mail address by the User ID itself just provides more points of confusion for the mail client.

If the user communicates with a certain person by e-mail address, the key should be bound to the e-mail protocol address on its own. Then the user themselves can decide what other monikers they want to use for the person; the User ID shouldn't force them to look at a "real" name just because it's been bundled together.

Split out ACLU identity

Note that my old certificate included some additional identities, including job-specific e-mail addresses. I've split out my job-specific cryptographic credentials to a different OpenPGP key entirely. If you want to mail me at dkg@aclu.org, you can use key 888E6BEAC41959269EAA177F138F5AB68615C560 (which is also published on my work bio page).

This is in part because the folks who communicate with me at my ACLU address are more likely to have old or poorly-maintained e-mail systems than other people i communicate with, and they might not be able to handle curve 25519. So the ACLU key is using 3072-bit RSA, which is universally supported by any plausible OpenPGP implementation.

This way i can experiment with being more forward-looking in my free software and engineering community work, and shake out any bugs that i might find there, before cutting over the e-mails that come in from more legal- and policy-focused colleagues.

Isolated Subkey Capabilities

In my new certificate, the primary key is designated certification-only. There are three subkeys, one each for authentication, encryption, and signing. The primary key also has a longer expiration time (2 years as of this writing), while the subkeys have 1 year expiration dates.

Isolating this functionality helps a little bit with security (i can take the certification key entirely offline while still being able to sign non-identity data), and it also offers a pathway toward having a more robust subkey rotation schedule. As i build out my tooling for subkey rotation, i'll probably make a few more blog posts about that.

Autocrypt-friendly

Finally, several of these changes are related to the Autocrypt project, a really great collaboration of a group of mail user agent developers, designers, UX experts, trainers, and users, who are providing guidance to make encrypted e-mail something that normal humans can use without having to think too much about it.

Autocrypt treats the OpenPGP certificate User IDs as merely decorative, but its recommended form of the User ID for an OpenPGP certificate is just the bare e-mail address. With the User IDs split out as described above, i can produce a minimized OpenPGP certificate that's Autocrypt-friendly, including only the User ID that matches my sending e-mail address precisely, and leaving out the rest.

I'm proud to be associated with the Autocrypt project, and have been helping to shepherd some of the Autocrypt functionality into different clients (my work on my own MUA of choice, notmuch is currently stalled, but i hope to pick it back up again soon). Having an OpenPGP certificate that works well with Autocrypt, and that i can stuff into messages even from clients that aren't fully-Autocrypt compliant yet is useful to me for getting things tested.

Documenting workflow vs. tooling

Some people may want to know "how did you make your OpenPGP cert like this?" For those folks, i'm sorry but this is not a step-by-step technical howto. I've read far too many "One True Way To Set Up Your OpenPGP Certificate" blog posts that haven't aged well, and i'm not confident enough to tell people to run the weird arbitrary commands that i ran to get things working this way.

Furthermore, i don't want people to have to run those commands.

If i think there are sensible ways to set up OpenPGP certificates, i want those patterns built into standard tooling for normal people to use, without a lot of command-line hackery.

So if i'm going to publish a "how to", it would be in the form of software that i think can be sensibly maintained and provides a sane user interface for normal humans. I haven't written that tooling yet, but i need to change keys first, so for now you just get this blog post in English. But feel free to tell me what you think i could do better!

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppArmadillo 0.9.200.7.0

Planet Debian - Thu, 2019-01-17 21:00

A new RcppArmadillo bugfix release arrived at CRAN today. The version 0.9.200.7.0 is another minor bugfix release, based on the new Armadillo bugfix release 9.200.7 from earlier this week. I also just uploaded the Debian version, and Uwe's systems have already created the CRAN Windows binary.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 559 other packages on CRAN.

This release just brings minor upstream bug fixes, see below for details (and we also include the updated entry for the November bugfix release).

Changes in RcppArmadillo version 0.9.200.7.0 (2019-01-17)
  • Upgraded to Armadillo release 9.200.7 (Carpe Noctem)

  • Fixes in 9.200.7 compared to 9.200.5:

    • handling complex compound expressions by trace()

    • handling .rows() and .cols() by the Cube class

Changes in RcppArmadillo version 0.9.200.5.0 (2018-11-09)
  • Upgraded to Armadillo release 9.200.5 (Carpe Noctem)

  • Changes in this release

    • linking issue when using fixed size matrices and vectors

    • faster handling of common cases by princomp()

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Reuven Lerner: Beyond the “hello, world” of Python’s “print” function

Planet Python - Thu, 2019-01-17 19:03

One of the first things that anyone learns in Python is (of course) how to print the string, “Hello, world.”  As you would expect, the code is straightforward and simple:

print('Hello, world')

And indeed, Python’s “print” function is so easy and straightforward to use that we barely give it any thought.  We assume that people know how to use it — and for the most part, for most of the things they want to do, that’s true.

But lurking beneath the surface of the “print” function is a lot of functionality, as well as some history (and even a bit of pain).  Understanding how to use “print” can cut down on the code you write, and generally make it easier for you to work with.

The basics

The basics are simple: “print” is a function, which means that if you want to invoke it, you need to use parentheses:

>>> print('hello')
hello

You can pass any type of data to "print". Strings are most common, but you can also pass ints, floats, lists, tuples, dicts, sets, or any other object. For example:

>>> print(5)
5

or

>>> print([10, 20, 30])
[10, 20, 30]

And of course, it doesn’t matter whether the thing you’re trying to print is passed as a literal object, or referenced by a variable:

>>> d = {'a':1, 'b':2, 'c':3}
>>> print(d)
{'a':1, 'b':2, 'c':3}

You can also put an expression inside of the parentheses; the value of the expression will be passed to “print”:

>>> print(3+5)
8
>>> print([10, 20] + [30, 40])
[10, 20, 30, 40]

Every object in Python knows how to display itself as a string, which means that you can pass it directly to “print”. There isn’t any need to turn things into strings before handing them to “print”:

>>> print(str([10, 20, 30]))    # unnecessary use of "str"
[10, 20, 30]

After “print” displays its output, it adds a newline.  For example:

>>> print('abc')
>>> print('def')
>>> print('ghi')
abc
def
ghi

You can pass as many arguments as you want to “print”, separated by commas. Each will be printed, in order, with a space between them:

>>> print('abcd', 'efgh', [10, 20, 30], 99, 'ijkl')
abcd efgh [10, 20, 30] 99 ijkl

We’ll see, below, how we can change these two default behaviors.

Inputs and outputs

If “print” is a function, then it must have a return value. Let’s take a look:

>>> x = print('abcd')
>>> type(x)
NoneType

In other words: “print” returns None, no matter what you print. After all, you’re not printing in order to get a return value, but rather for the side effect.

What about arguments to “print”?  Well, we’ve already seen that we can pass any number of arguments, each of which will be printed.  But there are some optional parameters that we can pass, as well.

The two most relevant ones allow us to customize the behavior we saw before, changing what string appears between printed items and what is placed at the end of the output.

The “sep” parameter, for example, defaults to ‘ ‘ (a space character), and is placed between printed items.  We can set this to any string, including a multi-character string:

>>> print('a', 'b', 'c', sep='*')
a*b*c
>>> print('abc', 'def', 'ghi', sep='***')
abc***def***ghi
>>> print([10, 20, 30], [40, 50, 60], [70, 80, 90], sep='***')
[10, 20, 30]***[40, 50, 60]***[70, 80, 90]

Notice that “sep” is placed between the arguments to “print”, not between the elements of each argument.  Thus in this third example, the ‘***’ goes between the lists, rather than between the integer elements of the lists.

If you want the arguments to be printed alongside one another, you can set “sep” to be an empty string:

>>> print('abc', 'def', 'ghi', sep='')
abcdefghi

Similarly, the “end” parameter defaults to ‘\n’ (newline), but can contain any string. It determines what’s printed after “print” is done.

For example, if you want to have some extra lines after you print something, just change “end” so that it has a few newlines:

>>> def foo():
        print('abc', end='\n\n\n')
        print('def', end='\n\n\n')

>>> foo()
abc


def

If, by contrast, you don’t want “print” to add a newline at the end of what you print, you can set “end” to be an empty string:

>>> def foo():
        print('abc', end='')
        print('def', end='')

>>> foo()
abcdef>>>

Notice how in the Python interactive shell, using the empty string to print something means that the next ‘>>>’ prompt comes after what you printed.  After all, you didn’t ask for there to be a newline after what you wrote, and Python complied with your request.

Of course, you can pass values for "end" that don't involve newlines at all. For example, let's say that you want to output multiple fields to the screen, with the fields separated by colons on a single line:

>>> def foo():
        print('abc', end=':')
        print('def', end=':')
        print('ghi')

>>> foo()
abc:def:ghi

Printing to files

By default, “print” sends its data to standard output, known in Python as “sys.stdout”.  While the “sys” module is automatically loaded along with Python, its name isn’t available unless you explicitly “import sys”.

The “print” function lets you specify, with the “file” parameter, another file-like object (i.e., one that adheres to the appropriate protocol) to which you want to write. The object must be writable, but other than that, you can use any object.

For example:

>>> f = open('myfile.txt', 'w')
>>> print('hello')
hello
>>> print('hello???', file=f)
>>> print('hello again')
hello again
>>> f.close()
>>> print(open('myfile.txt').read())
hello???

In this case, the output was written to a file.  But we could also have written to a StringIO object, for example, which acts like a file but isn’t one.
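For example, here is a small sketch of my own (not from the original post), using io.StringIO:

>>> import io
>>> s = io.StringIO()
>>> print('hello???', file=s)
>>> s.getvalue()
'hello???\n'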

Note that if I hadn’t closed “f” in the above example, the output wouldn’t have arrived in the file. That’s because Python buffers all output by default; whenever you write to a file, the data is only actually written when the buffer fills up (and is flushed), when you invoke the “flush” method explicitly, or when you close the file, and thus flush implicitly. Using the “with” construct with a file object closes it, and thus flushes the buffers as well.

There is another way to flush the output buffer, however: We can pass a True value to the “flush” parameter in “print”.  In such a case, the output is immediately flushed to disk, and thus written.  This might sound great, but remember that the point of buffering is to lessen the load on the disk and on the computer’s I/O system. So flush when you need, but don’t do it all of the time — unless you’re paid by the hour, and it’s in your interest to have things work more slowly.

Here’s an example of printing with and without flush:

>>> f = open('myfile.txt', 'w')
>>> print('abc', file=f)
>>> print('def', file=f)
>>> print(open('myfile.txt').read()) # no flush, and thus empty file

>>> print('ghi', file=f, flush=True)
>>> print(open('myfile.txt').read()) # all data has been flushed to disk
abc
def
ghi

You might have noticed a small inconsistency here: “print” writes to files, by default “sys.stdout”. And if we don’t flush or close the file, the output is buffered. So why don’t we have to flush (or close, not that closing “sys.stdout” would be a good idea) when we print to the screen?

The answer is that “sys.stdout” is treated specially by Python. As the Python docs say, it is “line buffered,” meaning that every time we send a newline character (‘\n’), the output is flushed.  So long as you are printing things to “sys.stdout” that end with a newline — and why wouldn’t you be doing that? — you won’t notice the buffering.
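
A classic case where the buffering does become visible is a progress indicator that prints without newlines. Here’s a small sketch; without “flush=True”, the dots might all appear at once at the end:

>>> import time
>>> for _ in range(5):
...     print('.', end='', flush=True)  # each dot appears immediately
...     time.sleep(1)
...
.....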

Remember Python 2?

As I write this, in January 2019, there are fewer than 12 months remaining before Python 2 is no longer supported or maintained. This doesn’t change the fact that many of my clients are still using Python 2 (because rewriting their large code base isn’t worthwhile or feasible).  If you’re still using Python 2, you should really be trying to move to Python 3.

And indeed, one of the things that strikes people moving from Python 2 to 3 is the difference in “print”.

First and foremost, “print” in Python 2 is a statement, not a function. This means that the parentheses are optional in 2, while they’re mandatory in 3; that’s one of the first things people learn when they move from 2 to 3.

This also means that “print” in Python 2 cannot be passed as an argument to other functions. In Python 3, “print” is an ordinary function object, so you can.
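
For example, here’s a small sketch of passing “print” to “map” in Python 3; the list of None values at the end is just the collected return values:

>>> list(map(print, 'abc'))
a
b
c
[None, None, None]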

Python 2’s “print” statement didn’t have the parameters (or defaults) that we have at our disposal. You wanted to print to a file other than “sys.stdout”? Assign your file object to “sys.stdout” before using “print”, or just write to the file with its “write” method. You wanted “print” not to add a newline after printing? Put a comma at the end of the line. (Yes, really; it’s ugly, but it works.)
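
Here’s a sketch of those Python 2 workarounds; note that this is Python 2 syntax, so it won’t run under Python 3:

import sys

f = open('myfile.txt', 'w')
sys.stdout = f                # redirect subsequent "print" statements to the file
print 'into the file'
sys.stdout = sys.__stdout__   # restore the real standard output

print 'abc',                  # trailing comma suppresses the newline
print 'def'                   # continues on the same line: "abc def"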

What if you’re working in Python 2, and want to get a taste of Python 3’s print function?  You can add this line to your code:

from __future__ import print_function

Once you have done so, Python 3’s “print” function will be in place.

Now I know that Python 3 is no longer in the future; indeed, you could say that Python 2 is in the past. But for many people who want to transition, or to learn how to do it, this import is a good first step. Watch out, though: if you have calls to “print” without parentheses, or use trailing commas to suppress the newline, then you’ll need to do more than just this import. You’ll have to go through your code and make sure it works the new way. So while the import might seem like a quick fix, it’s only the first step of a much larger transition from 2 to 3.


The post Beyond the “hello, world” of Python’s “print” function appeared first on Lerner Consulting Blog.

Categories: FLOSS Project Planets

On Wallpapers

Planet KDE - Thu, 2019-01-17 18:49
TL;DR, I’ll be switching to releasing new wallpapers every second Plasma release, on even-numbered versions.

This is just a post to refer to for those who have asked me about Plasma 5.15 and a new wallpaper. Since I started working on Plasma 5 wallpapers, there have always been a number of factors determining how exactly I made them. After some agonising debate, I’ve decided to slow the wallpaper release pace, because a number of things have changed since I started contributing them:

Bugs & Releases. One of the early goals for wallpapers was to have one for each release so developers could identify versions of Plasma in screenshots of bug reports. This has become far less important, as issues have gone from “the bug causing everyone’s machines to catch fire while eating kittens” to things like “maybe stealing your left sock from the dryer”. Back in the day most distros didn’t offer rolling release options, so users would be reporting the bugs and sharing screenshots of old buggy versions. That, too, has changed; not only are rolling release options more plentiful, but standard release distros are well passed the dark days of immature Plasma 5. All said and done, we just don’t need wallpapers for developers to identify problem releases anymore; the bugs are far less severe and people are more up-to-date.

LTS Plasma versions & quality. While it may seem irrelevant to wallpapers, LTS stands out to me as the place where we really need to pour love and care into our designs. With each new wallpaper I’m pushing things a bit harder and a bit further, which means taking more time to create them; at the quality I want LTS wallpapers to ship with, it might take 3 to 5 dedicated days to produce a final product. That’s not including the post-reveal tweaks I make after receiving feedback, or the wallpapers I discard during the creation process (for each wallpaper released, I likely got halfway through two other designs). In other words, it’s becoming less sustainable.

The wallpapers aren’t crap anymore. It’s no secret that my first wallpapers were rough. When a new wallpaper was finished, there were real quality incentives for me to take the lessons learned and turn around a better one. Nowadays, though, most new wallpapers are visually pleasing, and people don’t mind if they stick around for a bit longer; I know a lot of people even go back to previous wallpapers. Adding to this, it’s become easy to get older wallpapers: OpenDesktop and GetHotNewStuff both offer easy access, and we now ship some of the most popular default wallpapers in the extended wallpapers package. While new wallpapers are always nice to have, it’s no longer bad to keep what we’ve got.

Those three big points bring me to moving the wallpaper cycle to every second Plasma release. New wallpapers will fall on even-numbered Plasma releases, landing squarely on the LTS releases and on the feature release directly between LTSs. That said, I hope future wallpapers will show the quality that the additional time affords me.

Categories: FLOSS Project Planets

Anastasia Tsikoza: Week 5: Resolving the blocker

Planet Debian - Thu, 2019-01-17 18:42
Photo by Pascal Debrunner on Unsplash

Hi, I am Anastasia, an Outreachy intern working on the project “Improve integration of Debian derivatives with Debian infrastructure and community”. Here are my other posts that haven’t appeared in the feed on time:

This post is about my work on the subscription feature for Debian derivatives, the first of the two main issues to be resolved within my internship. And this week’s topic from the organizers is “Think About Your Audience”, especially newcomers to the community and future Outreachy applicants. So I’ll try to write about the feature in a way that keeps the most important details while taking into account that readers might be unfamiliar with some terms and concepts.

What problem I was trying to solve

As I wrote here, around the middle of December the Debian derivatives census was re-enabled. It means that the census scripts started running on a scheduled basis. Every midnight they scanned through the derivatives’ wiki pages, retrieved all the info needed, processed it and formed files and directories for future use.

At each stage, different kinds of errors could occur. Usually that means the maintainers should be contacted so that they update the wiki pages of their derivatives or fix something in their apt repositories. Checking error logs, identifying new issues, notifying maintainers, and keeping track of who has already been notified and who hasn’t: all of this was always a significant amount of work, which eventually needed to be automated (at least partially).

What exactly I needed to do

By design, the subscription system was to have nothing in common with daily spam mailings: notifications would only be sent when new issues appeared, which doesn’t happen too often.

It was decided to create several small logs for each derivative and then merge them into one. The resulting log would also carry annotations, such as which issues are new and which have been fixed since the last log was sent.

So my task could be divided into three parts:

  • create a way for folks to subscribe
  • write a script joining small logs into separate log for each particular derivative
  • write a script sending the log to subscribers (if it has to be sent)

The directory with the small logs was supposed to be kept until the next run to be compared with the new small logs. If some new issues appeared, the combined log would be sent to subscribers; otherwise it would simply be stored in the derivative’s directory.
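
A hypothetical sketch of that comparison step (the function name and file layout here are my assumptions, not the census’s actual code):

from pathlib import Path

def has_new_issues(old_dir, new_dir, derivative):
    """Return True when the derivative's new combined log contains
    issue lines absent from the previous one, i.e. when subscribers
    should be notified."""
    old_log = Path(old_dir) / derivative / 'combined.log'
    new_log = Path(new_dir) / derivative / 'combined.log'
    old_lines = set(old_log.read_text().splitlines()) if old_log.exists() else set()
    new_lines = set(new_log.read_text().splitlines())
    return bool(new_lines - old_lines)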

Overcoming difficulties

I ran into my first difficulties almost immediately after my changes were deployed. I had forgotten about the parallel execution of make: in the project’s crontab file we run make --jobs=3, which means three parallel make jobs create files and directories concurrently. Everything can descend into chaos if you aren’t extra careful with the makefiles.

Every time I needed to fix something, Paul had to run a cron job and recreate most of the files from scratch. For nearly two days we pulled together, coordinating our actions through IRC. At the end of the first day the main problem seemed to be solved and I let myself rejoice… and then things went wrong. I was at my wits’ end and felt extremely tired, so I went off to bed. The next day, I found the solution seconds after waking up!

Cool things

When everything finally went off without a hitch, it was a huge relief. I’m happy that some folks have already signed up for the error notifications, and I hope they find the feature useful.

Also, I was truly pleased to learn that Paul had first encountered makefiles’ order-only prerequisites in my code! That was quite a surprise :)
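
For readers who haven’t met them: in a Makefile, prerequisites listed after a “|” are order-only. They must exist before the recipe runs, but their timestamps never trigger a rebuild, which is exactly what you want when parallel jobs share an output directory. A minimal, hypothetical sketch (not the census’s actual rules; recipe lines are tab-indented):

# "logs" must exist before any log is written, but touching the
# directory never forces the logs to be rebuilt.
logs/%.log: %.txt | logs
	./process $< > $@

logs:
	mkdir -p logs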

More details

If you are curious, a more detailed report will appear soon in the Projects section.

Categories: FLOSS Project Planets
