FLOSS Project Planets

Third alpha release of my project

Planet KDE - Fri, 2020-07-03 10:45

I’m glad to announce the third alpha of my GSoC 2020 project. For anyone not in the loop, I’m working on integrating Disney’s SeExpr expression language as a new type of Fill Layer.

Releases are available here:

Integrity hashes:

d5aa5138650c58ac93e16e5eef9e74f81d7eb4d3fa733408cee25e791bb7a3e1  krita-4.3.1-alpha-0b32800-x86_64.appimage
634d1c0dedc96bc8b267f02b5c431245eefde021a1e7b8e6fcdce33f5e62c25a  krita-4.3.1-alpha-0b32800992-x86_64.zip
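Downloads can be verified with the standard sha256sum utility; for example:

$ echo "d5aa5138650c58ac93e16e5eef9e74f81d7eb4d3fa733408cee25e791bb7a3e1  krita-4.3.1-alpha-0b32800-x86_64.appimage" | sha256sum -c -
krita-4.3.1-alpha-0b32800-x86_64.appimage: OK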

In this release, I fixed the following issues:

  • SeExpr textures use the scRGB color space, which Qt’s QColor does not support until Qt 5.12; this made the conversion to Krita’s color space unbearably slow (thanks Wolthera van Hövell)
  • Refactored SeExpr error reporting to make messages Qt-translatable.
    • This adds KDE’s ECM to the list of (optional) dependencies of SeExpr.
  • Error reporting is now available, including highlighting! (thanks Wolthera too for noticing)
  • Configuration is saved and restored when changing between Fill layer types (bug 422885, thanks Boudewijn Rempt)
  • Cleaned up SeExpr headers
    • They are now installed only if used in the UI library itself.
  • UI labels have extra spacing (thanks Wolthera van Hövell)

Another outstanding issue is SeExpr’s vulnerability to the current LC_NUMERIC locale, due to its use of sscanf and atof. I am sad to announce I won’t be able to change this; the library I wanted to use, scn, is itself vulnerable to locale changes.

But the most important feature, and my final contribution, is bundleable presets!

This enables SeExpr scripts to be bundled just like any other resource in Krita. Below you can find a bundle containing all of the example scripts posted by Wolthera on the Krita Artists thread.

Link: https://dump.amyspark.me/Krita_Artists'_SeExpr_examples.bundle

Integrity hash:

1e4a1bc6a9b8238cee96dfee9a50e7db903fe7b665758caf731d53c96597dc20 Krita_Artists'_SeExpr_examples.bundle

Looking forward to your comments!

Cheers,

~amyspark

PS: The Git hash in the file names points to a cleaned-up branch at my own fork of Krita. I am reviewing options to sync this work back to the main krita/4.3 branch.

Categories: FLOSS Project Planets

Michael Prokop: Grml 2020.06 – Codename Ausgehfuahangl

Planet Debian - Fri, 2020-07-03 10:32

We did it again™, at the end of June we released Grml 2020.06, codename Ausgehfuahangl. This Grml release (a Linux live system for system administrators) is based on Debian/testing (AKA bullseye) and provides current software packages as of June, incorporates up to date hardware support and fixes known issues from previous Grml releases.

I am especially fond of our cloud-init and qemu-guest-agent integration, which makes usage and automation in virtual environments like Proxmox VE much more comfortable.

Once the QEMU Guest Agent setting is enabled in the VM options (also see the Proxmox wiki), you’ll see IP address information in the VM summary:

Using a cloud-init drive allows using an SSH key for login as user "grml", and you can control network settings as well:
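As a minimal sketch, the user-data on such a cloud-init drive is a standard cloud-config file; the keys below are standard cloud-init, but exactly which settings Grml honors beyond the SSH key is an assumption on my part:

$ cat > user-data <<'EOF'
#cloud-config
users:
  - name: grml
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...yourkey... you@example.org
EOF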

It was fun to focus and work on this new Grml release together with Darsha, and we hope you enjoy the new Grml release as much as we do!

Categories: FLOSS Project Planets

Norbert Preining: KDE/Plasma Status Update 2020-07-04

Planet Debian - Fri, 2020-07-03 10:06

Great timing for 4th of July, here is another status update of KDE/Plasma for Debian. Short summary: everything is now available for Debian sid and testing, for both i386 and amd64 architectures!

With Qt 5.14 arriving in Debian/testing, and some tweaks here and there, we finally have all the packages (2 additional deps, 82 frameworks, 47 Plasma, 216 Apps) built on both Debian unstable and Debian testing, for both amd64 and i386 architectures. Again, big thanks to OBS!

Repositories:
For Unstable:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma519/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Unstable/ ./

For Testing:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma519/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Testing/ ./

As usual, don’t forget that you need to import my OBS gpg key: obs-npreining.asc, best to download it and put the file into /etc/apt/trusted.gpg.d/obs-npreining.asc.
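A minimal sketch of that step, assuming the key file has already been downloaded to the current directory:

$ sudo cp obs-npreining.asc /etc/apt/trusted.gpg.d/obs-npreining.asc
$ sudo apt update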

Enjoy.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: #28: Welcome RSPM and test-drive with Bionic and Focal

Planet Debian - Fri, 2020-07-03 09:05

Welcome to the 28th post in the relatively random R recommendations series, or R4 for short. Our last post was a “double entry” in this R4 series and the newer T4 video series and covered a topic touched upon in this R4 series multiple times: easy binary install, especially on Ubuntu.

That post already previewed the newest kid on the block: RStudio’s RSPM, now formally announced. In that post we were only able to show Ubuntu 18.04 aka bionic. With the formal release of RSPM, support has been added for Ubuntu 20.04 aka focal—and we are happy to announce that of course we added a corresponding Rocker r-rspm container. So you can now take full advantage of RSPM either via docker pull rocker/r-rspm:18.04 or via docker pull rocker/r-rspm:20.04, covering the two most recent LTS releases.

RSPM is a nice accomplishment. Covering multiple Linux distributions is an excellent achievement. Allowing users to reason in terms of the CRAN packages (i.e. installing xml2, not r-cran-xml2) eases use. Doing it via the standard R command install.packages() (or a wrapper around it like our install.r from the littler package) is very good too and an excellent technical achievement.

There is, as best as I can tell, only one shortcoming, along with one small bit of false advertising. The shortcoming is technical. By bringing the package installation into the user application domain, it is separated from the system and lacks integration with system libraries. What do I mean here? If you were to add R to a plain Ubuntu container, say 18.04 or 20.04, then add the few lines to support RSPM and install xml2, it would install. And fail. Why? Because the system library libxml2 does not get installed with the RSPM package—whereas the .deb from the distribution or PPAs pulls it in. So to help with some popular packages I added libxml2, libunits and a few more for geospatial work to the rocker/r-rspm containers. Being already present ensures packages xml2 and units can run immediately. Please file issue tickets at the Rocker repo if you come across other missing libraries we could preload. (A related minor nag is incomplete coverage. At least one of my CRAN packages does not (yet?) come as an RSPM binary. Then again, CRAN has 16k packages, and the RSPM coverage is much wider than the PPA one. But completeness would be neat. The final nag is the lack of Debian support, which seems, well, odd.)

So what about the small bit of false advertising? Well, it is claimed that RSPM makes installation “so much faster on Linux”. True, faster than the slowest possible installation from source. Also easier. But we have had numerous posts on this blog showing other speed gains: using ccache, and, of course, using binaries. And as the initial video mentioned above showed, installing from the PPAs is also faster than via RSPM. That is easy to replicate: just set up the rocker/r-ubuntu:20.04 (or 18.04) container alongside the rocker/r-rspm:20.04 (or 18.04) container, and then time install.r rstan (or install.r tinyverse) in the RSPM one against apt -y update; apt install -y r-cran-rstan (or ... r-cran-tinyverse). In every case I tried, the installation using binaries from the PPA was still faster by a few seconds. Not that it matters greatly: both are very, very quick compared to source installation (as e.g. shown here in 2017 (!!)), but the standard Ubuntu .deb installation is simply faster than using RSPM. (This is likely due to better CDN usage, so it may change over time. Neither method appears to do downloads in parallel, so there is scope for both to do better.)
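A hedged sketch of that comparison (the image tags and package commands are taken from the post; exact timings will vary with network and hardware):

$ docker run --rm rocker/r-rspm:20.04 bash -c 'time install.r rstan'
$ docker run --rm rocker/r-ubuntu:20.04 bash -c 'apt update -qq && time apt install -y r-cran-rstan'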

So in sum: Welcome to RSPM, and nice new tool—and feel free to “drive” it using rocker/r-rspm:18.04 or rocker/r-rspm:20.04.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Stack Abuse: How to Write a Makefile - Automating Python Setup, Compilation, and Testing

Planet Python - Fri, 2020-07-03 08:26
Introduction

When you want to run a project that has multiple sources, resources, etc., you need to make sure that all of the code is recompiled before the main program is compiled or run.

For example, imagine our software looks something like this:

main_program.source -> uses the libraries `math.source` and `draw.source`
math.source -> uses the libraries `floating_point_calc.source` and `integer_calc.source`
draw.source -> uses the library `opengl.source`

So if we make a change in opengl.source for example, we need to recompile both draw.source and main_program.source because we want our project to be up-to-date on all ends.

This is a very tedious and time-consuming process. And because all good things in the software world come from some engineer being too lazy to type in a few extra commands, Makefile was born.

Makefile uses the make utility, and if we're to be completely accurate, Makefile is just a file that houses the code that the make utility uses. However, the name Makefile is much more recognizable.

Makefile essentially keeps your project up to date by rebuilding only the necessary parts of your source code whose children are out of date. It can also automate compilation, builds and testing.

In this context, a child is a library or a chunk of code which is essential for its parent's code to run.

This concept is very useful and is commonly used with compiled programming languages. Now, you may be asking yourself:

Isn't Python an interpreted language?

Well, Python is technically both an interpreted and compiled language, because in order for it to interpret a line of code, it needs to precompile it into byte code which is not hardcoded for a specific CPU, and can be run after the fact.

A more detailed, yet concise explanation can be found on Ned Batchelder's blog. Also, if you need a refresher on how Programming Language Processors work, we've got you covered.

Concept Breakdown

Because Makefile is just an amalgamation of multiple concepts, there are a few things you'll need to know in order to write a Makefile:

  1. Bash Scripting
  2. Regular Expressions
  3. Target Notation
  4. Understanding your project's file structure

With these in hand, you'll be able to write instructions for the make utility and automate your compilation.

Bash is a command language (it's also a Unix shell but that's not really relevant right now), which we will be using to write actual commands or automate file generation.

For example, if we want to echo all the library names to the user:

DIRS=project/libs
for file in $(DIRS); do
    echo $$file
done

Target notation is a way of writing which files are dependent on other files. For example, if we want to represent the dependencies from the illustrative example above in proper target notation, we'd write:

main_program.cpp: math.cpp draw.cpp
math.cpp: floating_point_calc.cpp integer_calc.cpp
draw.cpp: opengl.cpp

As far as file structure goes, it depends on your programming language and environment. Some IDEs automatically generate some sort of Makefile as well, and you won't need to write it from scratch. However, it's very useful to understand the syntax if you want to tweak it.

Sometimes modifying the default Makefile is even mandatory, like when you want to make OpenGL and CLion play nice together.

Bash Scripting

Bash is mostly used for automation on Linux distributions, and is essential to becoming an all-powerful Linux "wizard". It's also an imperative script language, which makes it very readable and easy to understand. Note that you can run bash on Windows systems, but it's not really a common use case.

First let's go over a simple "Hello World" program in Bash:

#!/bin/bash
# The line above must come first: it indicates that we'll be using bash for this script
# The exact syntax is: #![source]
# By the way, comments in bash look like this
echo "Hello world!"

When creating a script, depending on your current umask, the script itself might not be executable. You can change this by running the following line of code in your terminal:

chmod +x name_of_script.sh

This adds execute permission to the target file. However, if you want to give more specific permissions, you can execute something similar to the following command:

chmod 777 name_of_script.sh

More information on chmod can be found at this link.

Next, let's quickly go over some basics utilizing simple if-statements and variables:

#!/bin/bash

echo "What's the answer to the ultimate question of life, the universe, and everything?"
read -p "Answer: " number

# We dereference variables using the $ operator
echo "Your answer: $number computing..."

# if statement
# The double brackets are necessary: whenever we want to calculate the value of an expression or subexpression, we have to use double brackets (imagine you have selective double vision)
if (( number == 42 ))
then
    echo "Correct!"
# This notation, even though it's more easily readable, is rarely used.
elif (( number == 41 || number == 43 )); then
    echo "So close!"
# This is a more common approach
else
    echo "Incorrect, you will have to wait 7 and a half million years for the answer!"
fi

Now, there is an alternative way of writing flow control which is actually more common than if statements. As we all know Boolean operators can be used for the sole purpose of generating side-effects, something like:

++a && b++

Which means that we first increment a, and then depending on the language we're using, we check if the value of the expression evaluates to True (generally if an integer is >0 or =/=0 it means its boolean value is True). And if it is True, then we increment b.

This concept is called conditional execution and is used very commonly in bash scripting, for example:

#!/bin/bash

# Regular if notation
echo "Checking if project is generated..."

# Very important note: the whitespace between `[` and `-d` is absolutely essential
# If you remove it, it'll cause a compilation error
if [ -d project_dir ]
then
    echo "Dir already generated."
else
    echo "No directory found, generating..."
    mkdir project_dir
fi

This can be rewritten using a conditional execution:

echo "Checking if project is generated..." [ -d project_dir ] || mkdir project_dir

Or, we can take it even further with nested expressions:

echo "Checking if project is generated..." [ -d project_dir ] || (echo "No directory found, generating..." && mkdir project_dir)

Then again, nesting expressions can lead down a rabbit hole and can become extremely convoluted and unreadable, so it's not advised to nest more than two expressions at most.

You might be confused by the weird [ -d ] notation used in the code snippet above, and you're not alone.

The reasoning behind this is that originally conditional statements in Bash were written using the test [EXPRESSION] command. But when people started writing conditional expressions in brackets, Bash followed, albeit with a very unmindful hack, by just remapping the [ character to the test command, with the ] signifying the end of the expression, most likely implemented after the fact.

Because of this, we can use the command test -d FILENAME which checks if the provided file exists and is a directory, like this [ -d FILENAME ].
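In other words, the two spellings below behave identically (standard test(1) behavior):

test -d project_dir && echo "it is a directory"
[ -d project_dir ] && echo "it is a directory"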

Regular Expressions

Regular expressions (regex for short) give us an easy way to generalize our code. Or rather to repeat an action for a specific subset of files that meet certain criteria. We'll cover some regex basics and a few examples in the code snippet below.

Note: When we say that an expression catches ( -> ) a word, it means that the specified word is in the subset of words that the regular expression defines:

# Literal characters just signify those same characters
StackAbuse -> StackAbuse
sTACKaBUSE -> sTACKaBUSE

# The or (|) operator is used to signify that something can be either one or the other string
Stack|Abuse -> Stack
            -> Abuse
Stack(Abuse|Overflow) -> StackAbuse
                      -> StackOverflow

# The conditional (?) operator is used to signify the potential occurrence of a string
The answer to life the universe and everything is( 42)?...
    -> The answer to life the universe and everything is...
    -> The answer to life the universe and everything is 42...

# The * and + operators tell us how many times a character can occur
# * indicates that the specified character can occur 0 or more times
# + indicates that the specified character can occur 1 or more times
He is my( great)+ uncle Brian.
    -> He is my great uncle Brian.
    -> He is my great great uncle Brian.

# The example above can also be written like this:
He is my great( great)* uncle Brian.
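These patterns can be tried directly with grep -E, which uses the same extended syntax (standard grep usage; it prints the lines that match):

$ echo "StackOverflow" | grep -E 'Stack(Abuse|Overflow)'
StackOverflow
$ echo "He is my great great uncle Brian." | grep -E 'He is my( great)+ uncle Brian\.'
He is my great great uncle Brian.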

This is just the bare minimum you need for the immediate future with Makefile. In the long term, though, learning regular expressions properly is a really good idea.

Target Notation

After all of this, now we can finally get into the meat of the Makefile syntax. Target notation is just a way of representing all the dependencies that exist between our source files.

Let's look at an example that has the same file structure as the example from the beginning of the article:

# First of all, all pyc (compiled .py files) are dependent on their source code counterparts
main_program.pyc: main_program.py
	python compile.py $<
math.pyc: math.py
	python compile.py $<
draw.pyc: draw.py
	python compile.py $<

# Then we can implement our custom dependencies
main_program.pyc: main_program.py math.pyc draw.pyc
	python compile.py $<
math.pyc: math.py floating_point_calc.py integer_calc.py
	python compile.py $<
draw.pyc: draw.py opengl.py
	python compile.py $<

Keep in mind that the above is just for the sake of clarifying how the target notation works. It's very rarely used in Python projects like this, because the difference in performance is in most cases negligible.

More often than not, Makefiles are used to set up a project, clean it up, maybe provide some help and test your modules. The following is an example of a much more realistic Python project Makefile:

# Signifies our desired python version
# Makefile macros (or variables) are defined a little bit differently than traditional bash variables; keep in mind that in the Makefile there's top-level Makefile-only syntax, and everything else is bash script syntax
PYTHON = python3

# .PHONY defines parts of the makefile that are not dependent on any specific file
# This is most often used to store functions
.PHONY = help setup test run clean

# Defining an array variable
FILES = input output

# Defines the default target that `make` will try to make, or in the case of a phony target, execute the specified commands
# This target is executed whenever we just type `make`
.DEFAULT_GOAL = help

# The @ makes sure that the command itself isn't echoed in the terminal
help:
	@echo "---------------HELP-----------------"
	@echo "To setup the project type make setup"
	@echo "To test the project type make test"
	@echo "To run the project type make run"
	@echo "------------------------------------"

# This generates the desired project file structure
# A very important thing to note is that macros (or makefile variables) are referenced in the target's code with a single dollar sign ${}, but all script variables are referenced with two dollar signs $${}
setup:
	@echo "Checking if project files are generated..."
	[ -d project_files.project ] || (echo "No directory found, generating..." && mkdir project_files.project)
	for FILE in ${FILES}; do \
		touch "project_files.project/$${FILE}.txt"; \
	done

# The ${} notation is specific to the make syntax and is very similar to bash's $()
# This function uses pytest to test our source files
test:
	${PYTHON} -m pytest

run:
	${PYTHON} our_app.py

# In this context, the *.project pattern means "anything that has the .project extension"
clean:
	rm -r *.project

With that in mind, let's open up the terminal and run the Makefile to help us out with generating and compiling a Python project:
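For example, since .DEFAULT_GOAL is set to help, typing just make prints the help text defined above (this output follows directly from the Makefile):

$ make
---------------HELP-----------------
To setup the project type make setup
To test the project type make test
To run the project type make run
------------------------------------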

Conclusion

Makefile and make can make your life much easier, and can be used with almost any technology or language.

It can automate most of your building and testing, and much more. And as can be seen from the example above, it can be used with both interpreted and compiled languages.

Categories: FLOSS Project Planets

Real Python: The Real Python Podcast – Episode #16: Thinking in Pandas: Python Data Analysis the Right Way

Planet Python - Fri, 2020-07-03 08:00

Are you using the Python library Pandas the right way? Do you wonder about getting better performance, or how to optimize your data for analysis? What does normalization mean? This week on the show we have Hannah Stepanek to discuss her new book "Thinking in Pandas".

Categories: FLOSS Project Planets

CubicWeb: Release of CubicWeb 3.28

Planet Python - Fri, 2020-07-03 05:21

Hello CubicWeb community,

It is with pleasure (and some delay) that we are proud to announce the release of CubicWeb 3.28.

The big highlights of this release are:

  • CubicWeb handles content negotiation: you can get an entity as RDF when it is requested via the Accept HTTP header (see this commit for instance, and the curl sketch just after this list)
  • CubicWeb has a new dynamic database connection pooler, which replaces the old static one. (see this commit for instance).
  • RQL resultsets now store the variable names used in the RQL Select queries. This should ease the use of rsets and will allow building better tools (see this commit)
  • CubicWeb now requires Python 3.6 at minimum.
  • A big upgrade in our CI workflow has been done, both for tests and documentation.
  • The development of CubicWeb has moved to Logilab's heptapod forge.
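As an illustration, the content negotiation works through the standard HTTP Accept header; a minimal curl sketch (the instance URL, entity path and the exact RDF serializations served are assumptions on my part):

$ curl -H 'Accept: text/turtle' https://mycubicwebapp.example.org/someentity/123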

To get more details about what has been added, modified or removed, you can have a look at the complete changelog published in CubicWeb’s documentation.

CubicWeb 3.28 has been published:

CubicWeb 3.29 is now on its way. Tomorrow (July 3rd, 2020) afternoon we will hold a v-sprint (Friday sprint) to work on the documentation of CubicWeb and its satellites. See you there!

Categories: FLOSS Project Planets

Python Bytes: #188 Will there be a "switch" in Python the language?

Planet Python - Fri, 2020-07-03 04:00
Sponsored by us! Support our work through:

  • Our courses at Talk Python Training: https://training.talkpython.fm/
  • Brian’s pytest book

Michael #1: Making a trading bot asynchronous using Python’s “unsync” library
https://medium.com/@MattGosden/tutorial-using-pythons-unsync-library-to-make-an-asynchronous-trading-bot-9ee2ae881272

  • by Matt Gosden
  • The older way — using the threading and multiprocessing libraries
  • The newer way — using async and await from the asyncio library embedded into core Python from 3.7 onwards
  • The easier way (I think) — using the @unsync decorator from the Python unsync library
  • Somewhat realistic example worth looking at.
  • Could discuss scalability more
  • Also, proper def async and asyncio.sleep() for those playing at home
  • But its absence kind of shows unsync winning anyway. 🙂 It does work, right?

Brian #2: Fruit salad scrum estimation scale
https://fberriman.com/2020/01/22/fruit-salad-a-scrum-estimation-scale/

  • From a Twitter question by Lacy Henschel, answered by Kathleen Jones
  • Fruit related to work:
    • how easy
    • potential for mess
    • how many seeds, possible problems
    • does it need dividing
  • The scale:
    • 1 - grape - trivial
    • 2 - apple - may take a bit of time but everyone knows how to divide it
    • 3 - cherry - easy but with some unknowns (what do you do with the pit?)
    • 5 - pineapple - somewhat undefined, no major unknowns, still a lot of work (lots of opinions on how to cut it)
    • 8 - watermelon - lots of work, some unknowns, messy (don’t know what you are getting into until you cut it open)
    • ?? - tomato - unknown task, needs more info before estimating (doesn’t belong in a fruit salad)
    • ?? - avocado - not scopable, probably urgent (goes bad quickly)

Michael #3: Math to Code
https://mathtocode.com/

  • Math to Code is an interactive Python tutorial to teach engineers how to read and implement math using the NumPy library.
  • by Vernon Thommeret
  • Nice flashcard style of learning the building blocks of np for standard math
  • Give it a try, solutions if you get stuck
  • Python and NumPy together
  • Source at GitHub: https://github.com/vthommeret/mathtocode
  • Interesting building blocks:
    • Skulpt for interpreting Python
    • Skulpt NumPy for a subset of NumPy
    • KaTeX for rendering LaTeX
    • Next.js for the frontend framework
    • Tailwind CSS for styling
    • remark for rendering Markdown questions
    • gray-matter for extracting Markdown frontmatter
    • RealFavIconGenerator for generating favicons

Brian #4: PEP 622 -- Structural Pattern Matching
https://www.python.org/dev/peps/pep-0622/

  • Draft status, targeted for Python 3.10
  • Syntax looks similar to a switch/case statement, even though two switch PEPs were rejected earlier
  • Designed not only to optimize if/elif/else statements but also to focus on sequence, mapping, and object destructuring.
  • match/case statement with many allowed patterns:
    • literal pattern: would then act similar to a switch/case statement
    • name pattern: assigns expression to a new variable if the previous case doesn’t succeed
    • constant value pattern: enums, similar to literal
    • sequence pattern: works like unpacking assignment
    • mapping pattern: like sequence unpacking, but for mappings, like dictionaries
    • class pattern: create objects for each case and call __match__()
    • combining patterns: | for multiple patterns, including binding patterns like name
    • guards: if expression to further clarify a case
    • named sub-patterns: ok, still getting my head around this

Michael #5: CodeArtifact from AWS
https://aws.amazon.com/about-aws/whats-new/2020/06/introducing-aws-codeartifact-a-fully-managed-software-artifact-repository-service/

  • via Tormod Macleod
  • AWS CodeArtifact is a fully managed software artifact repository service that makes it easy for organizations of any size to securely store, publish, and share packages used in their software development process
  • AWS CodeArtifact works with commonly used package managers and build tools such as Maven and Gradle (Java), npm and yarn (JavaScript), pip and twine (Python), making it easy to integrate CodeArtifact into your existing development workflows.
  • Can be configured to automatically fetch software packages from public artifact repositories such as the npm public registry, Maven Central, and the Python Package Index (PyPI), ensuring teams have reliable access to the most up-to-date packages.

Brian #6: invoke
https://www.pyinvoke.org/

  • suggested by Joreg Benesch
  • replacement for Makefiles
  • Confusion:
    • documentation is at pyinvoke.org
    • install with pip install invoke
    • there’s also another PyPI package, called pyinvoke, which is NOT what we are talking about.
  • invoke:
    • task execution library
    • Write tasks.py files in Python for Makefile-like things
    • tasks are Python functions decorated with @task, like:

@task
def build(c, clean=False):
    if clean:
        print("Cleaning!")
    print("Building!")

  • invoke tasks with invoke:

$ invoke build -c
$ invoke build --clean

  • you can:
    • run shell commands with c.run()
    • declare pre-tasks, tasks that need to run before this one, like “build” requires “clean”, etc.
    • use namespaces with multiple files
  • tool intended for building documentation, but could probably run lots of stuff with it, like deployment, testing, etc.

Extras:

Michael:

  • From Guido: Python 3.9.0 beta 3 is out now, for your immediate testing. Wait, what happened to beta 2? Interesting story.
  • The next pre-release, the fourth beta release of Python 3.9, will be 3.9.0b4. It is currently scheduled for 2020-06-29.

Joke:

  • Parenting a geek: http://geek-and-poke.com/geekandpoke/2012/11/28/parenting-a-geek.html
Categories: FLOSS Project Planets

Test and Code: 120: FastAPI &amp; Typer - Sebastián Ramírez

Planet Python - Fri, 2020-07-03 03:00

FastAPI is a modern, fast (high-performance), web framework for building APIs with Python based on standard Python type hints.
Typer is a library for building CLI applications, also based on Python type hints.
Type hints and many other details are intended to make it easier to develop, test, and debug applications using FastAPI and Typer.

The person behind FastAPI and Typer is Sebastián Ramírez.

Sebastián is on the show today, and we discuss:

  • FastAPI
  • Rest APIs
  • Swagger UI
  • Future features of FastAPI
  • Starlette
  • Typer
  • Click
  • Testing with Typer and Click
  • Typer autocompletion
  • Typer CLI

Special Guest: Sebastián Ramírez.

Sponsored By:

  • PyCharm Professional: Try PyCharm Pro for 4 months and learn how PyCharm will save you time. Promo Code: TESTANDCODE20 (https://testandcode.com/pycharm)

Support Test & Code: Python Testing for Software Engineering (https://www.patreon.com/testpodcast)

Links:

  • Explosion: https://explosion.ai/
  • FastAPI: https://fastapi.tiangolo.com/
  • Typer: https://typer.tiangolo.com/
  • OpenAPI Specification: https://swagger.io/specification/
  • JSON Schema: https://json-schema.org/
  • OAuth 2.0: https://oauth.net/2/
  • Starlette: https://www.starlette.io/
  • pydantic: https://pydantic-docs.helpmanual.io/
  • Swagger UI (REST API Documentation Tool): https://swagger.io/tools/swagger-ui/
  • Testing - Typer: https://typer.tiangolo.com/tutorial/testing/
  • Click: https://click.palletsprojects.com/en/7.x/
  • Testing Click Applications: https://click.palletsprojects.com/en/7.x/testing/
  • CLI Option autocompletion - Typer: https://typer.tiangolo.com/tutorial/options/autocompletion/
  • Typer CLI (completion for small scripts): https://typer.tiangolo.com/typer-cli/
Categories: FLOSS Project Planets

GSoC’20 First Evaluation

Planet KDE - Thu, 2020-07-02 20:00

Hello everyone,

In the last blog, I wrote about my first two weeks of the GSoC period. In this blog, I will write about the activities I have worked on since then and how I implemented multiple datasets for them.

So far, I have implemented multiple datasets for the following activities:

  1. Enumeration memory games
  2. Addition memory games with Tux and without Tux
  3. Subtraction memory games with Tux and without Tux
  4. Multiplication memory games with Tux and without Tux
Why multiple datasets for GCompris activities?

Previously, all of the activities had a single generalized dataset, so for some age groups (such as 3-5 years) an activity could seem quite difficult to play, while for other age groups it could seem too easy. Multiple datasets resolve this issue: with data tailored to various age groups, all the activities can be more adaptive for children.

Enumeration memory games

In this activity, the child needs to turn the cards to count the images and match them with the corresponding number cards. I started my work of implementing multiple datasets for the memory activities with this one. To do so, I needed to change the logic of the code to support both the default dataset and multiple datasets. After making the required changes there was an issue when a level had just two images, so I also updated the condition which checks whether the resources (images, sounds, texts) are sufficient to load a particular level. There are in total 8 datasets for this activity.
The image below shows the dataset screen for the Enumeration memory games activity.

After implementing multiple datasets for this activity there was a regression that affected all memory activities: a blank activity config was displayed for activities without multiple datasets. I fixed this too and pushed the changes to my branch for review. Once everything was done and tested, I made a merge request from my working branch so that it could be merged to master.

Addition memory games

In this activity, the child needs to turn the cards over to find two numbers which add up to the same, until all the cards are gone. The goal of this activity is to practice addition. After implementing multiple datasets for the enumeration memory games, it was not that difficult to do the same for the other memory activities: we just need to create the different Data.qml files in the resource directory and load the datasets. For this activity we use the function getAddTable() implemented in math_util.js and pass the respective number as an argument. There are in total 10 datasets for this activity. This activity also has a second mode, Addition memory games with Tux, so I implemented multiple datasets for it too; its dataset content is the same as the mode without Tux. Once I had implemented the datasets for both modes of the addition memory games, I made a merge request from my working branch.

Code of Data.qml file

import GCompris 1.0
import "qrc:/gcompris/src/activities/memory/math_util.js" as Memory

Data {
    objective: qsTr("Addition table of 1.")
    difficulty: 4
    data: [
        { // Level 1
            columns: 5,
            rows: 2,
            texts: Memory.getAddTable(1)
        }
    ]
}

As can be seen, the content of the dataset is in JSON format. To implement another dataset we just need to change the number from 1 to 2 and update the objective. Quite easy to implement all ten datasets :)
The image below shows the dataset screen for the Addition memory games activity.

Subtraction memory games

In this activity, the child needs to turn the cards over to find two cards which give the same subtraction result, until all the cards are gone. The goal of this activity is to practice subtraction. The dataset implementation procedure is the same as for the addition memory games; we just need to use the getSubTable() function from math_util.js. This activity also has two modes, so I implemented multiple datasets for both.
The image below shows the dataset screen for the Subtraction memory games activity.

Multiplication memory games

In this activity, the child needs to turn the cards over to find two cards which give the same product, until all the cards are gone. The goal of this activity is to practice multiplication. The dataset implementation procedure is the same as for the addition and subtraction memory games; we just need to use the getMulTable() function from math_util.js. This activity also has two modes, so I implemented multiple datasets for both.
The image below shows the dataset screen for the Multiplication memory games activity.

What’s next?

As mentioned, during the first evaluation coding period I implemented multiple datasets for the above-mentioned activities. I tested them manually and also got them tested by my younger sister :). After review by my mentors, all my merge requests were approved and have been merged to the master branch.

I will next work on implementing multiple datasets for the division memory games activity and the other memory activities. As our project is now being moved to GitLab by the KDE administrators, I will push all my further work there, on the respective branch of each activity.

Regards
Deepak Kumar

Categories: FLOSS Project Planets

Reproducible Builds (diffoscope): diffoscope 150 released

Planet Debian - Thu, 2020-07-02 20:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 150. This version includes the following changes:

[ Chris Lamb ]
* Don't crash when listing entries in archives if they don't have a listed size (such as hardlinks in .ISO files). (Closes: reproducible-builds/diffoscope#188)
* Dump PE32+ executables (including EFI applications) using objdump. (Closes: reproducible-builds/diffoscope#181)
* Tidy detection of JSON files due to missing call to File.recognizes that checks against the output of file(1), which was also causing us to attempt to parse almost every file using json.loads. (Whoops.)
* Drop accidentally-duplicated copy of the new --diff-mask tests.
* Logging improvements:
  - Split out formatting of class names into a common method.
  - Clarify that we are generating presenter formats in the opening logs.

[ Jean-Romain Garnier ]
* Remove objdump(1) offsets before instructions to reduce diff noise. (Closes: reproducible-builds/diffoscope!57)
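If you have not used it before, diffoscope takes two files (or directories, archives, disk images) and compares them recursively; a minimal sketch of typical usage:

$ diffoscope old.changes new.changes
$ diffoscope --html report.html one.iso two.iso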

You can find out more by visiting the project homepage.

Categories: FLOSS Project Planets

KSnip and Spectacle

Planet KDE - Thu, 2020-07-02 18:00

I have two screenshot applications installed – KSnip and Spectacle – because they offer different, and independently useful, functionality. Here’s some notes on what each does well.

Spectacle has been in the FreeBSD ports collection for some time now, since it ships as part of the KDE release service.

KSnip was recently added to the FreeBSD ports collection: we already had KColorPicker as a dependency for Spectacle, so packaging up the rest of the stack from Damir was a natural next-step.

KSnip

The biggest reason I have for KSnip is the multiple-screenshots-in-tabs feature it has. Like any other document reader, it has tabs and you can switch between them relatively quickly. I use this particularly for keeping track of visual changes while developing Calamares: by screenshotting the Calamares window repeatedly, I can see what changes (for instance, while doing screen-margin tweaks for mobile).

Switching back-and-forth between the tabs gives me a “pixels moved” sense, and that’s really useful. KSnip’s wide selection of annotation tools – it’s nearly a specialized drawing application – helps, too: I tell people to draw big red arrows on screenshots pointing to problems (because describing things is difficult, and a glaring visual glitch to you may be totally invisible to me).

With KSnip, adding detail to a screenshot is child’s play.

That’s not to say that KSnip doesn’t have its issues. But a blog post is not the place to complain about someone else’s Free Software: the issue tracker is (with constructive bug reports, not complaints).

Spectacle

Spectacle on the other hand integrates more nicely with my Plasma desktop on the whole, can screenshot pop-ups and tooltips and understands weird screen geometry.

I end up using Spectacle more for the “quick screenshot needed” part of writing blogs, and sometimes for sharing bits of screen where no annotations are needed.

With Spectacle, delivering or sharing the screenshot is simple.

Overall, I like having two applications that each do their own thing, and do their thing pretty well. Since the shared code stack is enormous, each of the two applications is only small: less than 1MB. That’s a small price for specialization.

Categories: FLOSS Project Planets

health @ Savannah: GNU Health Control Center 3.6.5 supports Weblate

GNU Planet! - Thu, 2020-07-02 17:50

Dear community

As you may know, GNU Health HMIS has migrated its translation server to Weblate.
(see news https://savannah.gnu.org/forum/forum.php?forum_id=9762)

Today, we have released the GH Control Center 3.6.5, which has support for Weblate for language installation. The syntax is the same, and you won’t notice any difference.

To update to the latest GH Control Center, run:

$ cdutil
$ ./gnuhealth-control update

That will fetch and install the latest version, and you're ready to go :)

Happy and Healthy hacking!
Luis

Categories: FLOSS Project Planets

Ben Hutchings: Debian LTS work, June 2020

Planet Debian - Thu, 2020-07-02 15:25

I was assigned 20 hours of work by Freexian's Debian LTS initiative, and worked all 20 hours this month.

I sent a final request for testing for the next update to Linux 3.16 in jessie. I also prepared an update to Linux 4.9, included in both jessie and stretch. I completed backporting of kernel changes related to CVE-2020-0543, which was still under embargo, to Linux 3.16.

Finally I uploaded the updates for Linux 3.16 and 4.9, and issued DLA-2241 and DLA-2242.

The end of June marked the end of long-term support for Debian 8 "jessie" and for Linux 3.16. I am no longer maintaining any stable kernel branches, but will continue contributing to them as part of my work on Debian 9 "stretch" LTS and other Debian releases.

Categories: FLOSS Project Planets

Reuven Lerner: Level up your Python skills with a supercharged Humble Bundle!

Planet Python - Thu, 2020-07-02 13:42

Want to improve your Python skills?

Yeah, I know. Of course you do.

Well, then you should grab an amazing deal from Humble Bundle, with content from a bunch of online Python trainers — including me!

Buying the bundle not only gives you access to some amazing Python training at a great price. It also supports the Python Software Foundation (which handles the administrative side of the Python language and ecosystem) and Race Forward (which works to improve race relations in the US).

There are three tiers to the bundle, and I have a course in each one:

  1. Comprehending Comprehensions
  2. Object-oriented Python
  3. Any one cohort of Weekly Python Exercise

Included in the bundles are also courses and books from Michael Kennedy, Trey Hunner, Matt Harrison, PyBites (Bob and Julian), Real Python (Dan Bader), and Cory Althoff. Plus it includes a subscription to the PyCharm editor.

So don’t delay! Sign up for this Humble Bundle, improve your Python, help two good causes, and save some money. It’s only available for another 20 days!

Sign up here: https://www.humblebundle.com/software/python-programming-software

The post Level up your Python skills with a supercharged Humble Bundle! appeared first on Reuven Lerner.

Categories: FLOSS Project Planets

Daniel Roy Greenfeld: I'm Teaching A Live Online Django Crash Course

Planet Python - Thu, 2020-07-02 12:38

Course Announcement

On July 16th and 17th of 2020, I’ll be teaching a live online edition of my beginner-friendly Django Crash Course. This is a live interactive class conducted via Zoom conferencing software. We’re going to walk through the book together with students. If you get stuck, there will be at least two members of the Feldroy team available to help.

Each course day will have two sessions each 3 hours long, as well as an hour-long break between sessions.

Attendees Receive
  • Hours of instruction in building web apps by noted authors and senior programmers
  • An invite to both July 16th and July 17th class days
  • The Django Crash Course e-book (if you already bought one, we'll send you a discount code for $19.99 off the online class)
  • Membership in our forthcoming online forums when they are activated
Class Prerequisites
  • Basic knowledge of the Python programming language
  • Computer where you are allowed to install software (No work restrictions)
  • Internet fast enough to join online meetings
Topics Covered
  • Setting up a development environment
  • Cookiecutter for rapidly accelerating development
  • Django
    • Forms
    • Class-Based Views
    • Models
    • Templates
    • Admin
  • Writing Django tests
    • PyTest
    • Factories
  • Best practices per Two Scoops of Django
    • Proven patterns for avoiding duplication of work (DRY)
    • Writing maintainable code
    • More secure projects

We're selling the course for the introductory price of just $99 and space is limited, so register today!

Categories: FLOSS Project Planets

Gábor Hojtsy: Learn about and shape the future of Drupal at DrupalCon Global

Planet Drupal - Thu, 2020-07-02 12:24

Drupal 9 was just released last month, and in less than two weeks we get together to celebrate it (again), learn, grow and plan together for the future at DrupalCon Global.

I presented my "State of Drupal 9" talk at various events for over a year now, and while the original direction of questions were about how the transition would work, lately it is more about what else can we expect from Drupal 9 and then Drupal 10. This is a testament and proof to the continuous upgrade path we introduced all the way back in 2017. Now that Drupal 9.0 is out, we can continue to fill the gaps and add new exciting capabilities to Drupal core.

DrupalCon Global will have various exciting events and opportunities to learn about and help shape the future of Drupal 9 and even Drupal 10. Tickets are $249 and get you access to all session content, summits and BoF discussions. As usual, contributions do not require a ticket and will happen all week as well, including a dedicated contribution day on Friday. Here is a sampling of all content elements discussing, planning on and even building the future of Drupal.

Sessions about the future of Drupal
Photo by Austin Distel on Unsplash

First there is the Driesnote of course. Dries will share the result of the Drupal 2020 Product Survey and discuss plans for Drupal 10. There is a followup Q&A session to discuss the keynote and other topics with Dries live.

The Drupal Initiatives Plenary coordinated by yours truly is going to feature various important leaders in our community working on diversity and inclusion, accessibility, events, mentoring, promotion as well as core components like the Claro admin theme and the Olivero frontend theme. This is the best way to get an overview of how Drupal's teams work, what are their plans and challenges. Even better, the plenary session is followed by a BoF where we can continue the discussion in a more interactive form.

In Drupal Core markup in continuous upgrade path Lauri Eskola will dive into why the deprecation process used for PHP and JavaScript code is not workable for HTML and CSS. This informs the direction of where markup is going in Drupal 9 and 10 onwards.

In the Drupal.org Panel the Drupal Association team discusses how key initiatives are supported on Drupal.org including Composer, Automatic Updates and even Merge Requests for Drupal contribution and plans for the future.

Mike Baynton and David Strauss will discuss Automatic updates in action and in depth showing what is possible now and what are the future plans.

There are not one but two sessions about the new proposed frontend theme. In The Olivero theme: Turning a wild idea into a core initiative, Mike Herchel and Putra Bonaccorsi discuss the whole history and future plans, while Designing for chaos: The design process behind Olivero covers the design process specifically.

Moshe Weitzman leads a core conversation to take stock of the current command line tools for Drupal and discuss what a more complete core solution would look like in A robust command line tool for all Drupal sites.

In Let’s Make Drupal Core Less Complicated Ted Bowman will propose ways to simplify Drupal core for existing uses and to achieve an easier learning curve.

Finally, Drupal 9: New Initiatives for Drupal offers a chance to discuss new initiatives proposed by Dries in the Driesnote. If you are interested in joining in or discussing the plans, this is your opportunity!

Birds of a Feather discussions about the future of Drupal

Attendees with tickets for DrupalCon Global will be able to participate in live discussions about key topics. BoF submission is open, so this list will likely grow over time.

Ofer Shaal leads a discussion titled Standardize Rector rules as part of Drupal core deprecations to make sure the transition from Drupal 9 to 10 will be even easier than Drupal 8 to 9 is.

Submit your Birds of a Feather discussion now.

Contribute to the future of Drupal
Photo by WOCinTech Chat on Flickr

Just like in-person DrupalCons, DrupalCon Global contribution will be free to attend and does not require a ticket. The contribution spaces are especially good to go to if you are interested in the future of Drupal and making a difference.

If you've been to a DrupalCon or a DrupalCamp before, you know a contribution event usually involves one or more rooms with tables that have signage on them for what they are working on. This is not exactly possible online; however, we devised a system to replicate tables as groups at https://contrib2020.getopensocial.net/all-groups which allows you to see what topics will be covered and who the leads are. (Huge props to Rachel Lawson at the Drupal Association for building this out!)

If your topic is not yet there, you should create a group now. Groups indicate what they are working on and what skills they need from contributors. You should join groups you are interested in helping and read their information for guidance. Teams will post group events to let you know when certain activities (introduction, review sessions, co-working on specific problems or meetings to discuss issues) will happen. Events will also be used to signify when you are most likely to find people working on the topics. The OpenSocial site is a directory of topics and events; contribution itself will happen on drupal.org, with discussion on Drupal Slack for most groups.

There are already groups for Configuration Management 2.0, the Olivero theme, the Bug Smash initiative and Media. Stay tuned for more appearing as the event comes closer.

Categories: FLOSS Project Planets

Mike Gabriel: My Work on Debian LTS (June 2020)

Planet Debian - Thu, 2020-07-02 10:20

In June 2020, I have worked on the Debian LTS project for 8 hours (of 8 hours planned).

LTS Work
  • frontdesk: CVE bug triaging for Debian jessie LTS: mailman, alpine, python3.4, redis, pound, pcre3, ngircd, mutt, lynis, libvncserver, cinder, bison, batik.
  • upload to jessie-security: libvncserver (DLA-2264-1 [1], 9 CVEs)
  • upload to jessie-security: mailman (DLA-2265-1 [2], 1 CVE)
  • upload to jessie-security: mutt (DLA-2268-1 [3] and DLA-2268-2 [4]), 2 CVEs)
Other security related work for Debian
  • make sure all security fixes for php-horde-* are also in Debian unstable
  • upload freerdp2 2.1.2+dfsg-1 to unstable (9 CVEs)
References
Categories: FLOSS Project Planets
