FLOSS Project Planets

Bits from Debian: Debian celebrates 26 years, Happy DebianDay!

Planet Debian - Fri, 2019-08-16 14:12

26 years ago today in a single post to the comp.os.linux.development newsgroup, Ian Murdock announced the completion of a brand new Linux release named #Debian.

Since that day we’ve been into outer space, typed over 1,288,688,830 lines of code, spawned over 300 derivatives, been enhanced by 6,155 known contributors, and filed over 975,619 bug reports.

We are home to a community of thousands of users around the globe. We gather to host our annual Debian Developers Conference, #DebConf, which spans the world in a different country each year, and of course today many #DebianDay celebrations are being held around the world.

It's not too late to throw an impromptu #DebianDay celebration or to go and join one of the many celebrations already underway.

As we celebrate our own anniversary, we also want to celebrate our many contributors, developers, teams, groups, maintainers, and users. It is all of your effort, support, and drive that continue to make Debian truly: The universal operating system.

Happy #DebianDay!

Categories: FLOSS Project Planets

Jonathan McDowell: DebConf19: Brazil

Planet Debian - Fri, 2019-08-16 13:46

My first DebConf was DebConf4, held in Porto Alegre, Brazil back in 2004. Uncle Steve did the majority of the travel arrangements for 6 of us to go. We had some mishaps which we still tease him about, but it was a great experience. So when I learnt DebConf19 was to be in Brazil again, this time in Curitiba, I had to go. So last November I realised flights were only likely to get more expensive, that I’d really kick myself if I didn’t go, and so I booked my tickets. A bunch of life happened in the meantime that meant the timing wasn’t particularly great for me - it’s been a busy 6 months - but going was still the right move.

One thing that struck me about DC19 is that a lot of the faces I’m used to seeing at a DebConf weren’t there. Only myself and Steve from the UK DC4 group made it, for example. I don’t know if that’s due to the travelling distances involved, or just the fact that attendance varies and this happened to be a year where a number of people couldn’t make it. Nonetheless I was able to catch up with a number of people I only really see at DebConfs, as well as getting to hang out with some new folk.

Given how busy I’ve been this year and expect to be for at least the next year I set myself a hard goal of not committing to any additional tasks. That said DebConf often provides a welcome space to concentrate on technical bits. I reviewed and merged dkg’s work on WKD and DANE for the Debian keyring under debian.org - we’re not exposed to the recent keyserver network issues due to the fact the keyring is curated, but providing additional access to our keyring makes sense if it can be done easily. I spent some time with Ian Jackson talking about dgit - I’m not a user of it at present, but I’m intrigued by the potential for being able to do Debian package uploads via signed git tags. Of course I also attended a variety of different talks (and, as usual, at times the schedule conflicted such that I had a difficult choice about which option to choose for a particular slot).

This also marks the first time I did a non-team related talk at DebConf, warbling about my home automation (similar to my NI Dev Conf talk but with some more bits about the Debian involvement thrown in):

In addition I co-presented a couple of talks for teams I’m part of:

I only realised late in the week that 2 talks I’d normally expect to attend, a Software in the Public Interest BoF and a New Member BoF, were not on the schedule, but to be honest I don’t think I’d have been able to run either even if I’d realised in advance.

Finally, DebConf wouldn’t be DebConf without playing with some embedded hardware at some point, and this year it was the Caninos Loucos Labrador. This is a Brazilian-grown, single-board, ARM-based computer with a modular form factor designed for easy integration into bigger projects. There’s nothing particularly remarkable about the hardware and you might ask why not just use a Pi? The reason is that import duties in Brazil make such things prohibitively expensive - importing a $35 board can end up costing $150 by the time shipping, taxes and customs fees are all taken into account. The intent is to design and build locally, as components can be imported with minimal taxes if the final product is being assembled within Brazil. And Mercosul allows access to many other South American countries without tariffs. I’d have loved to get hold of one of the boards, but they’ve only produced 1000 in the initial run and really need to get them into the hands of people who can help progress the project rather than those who don’t have enough time.

Next year DebConf20 is in Haifa - a city I’ve spent some time in before - but I’ve made the decision not to attend; rather than spending a single 7-10 day chunk away from home I’m going to aim to attend some more local conferences for shorter periods of time.

Categories: FLOSS Project Planets

Stack Abuse: Basics of Memory Management in Python

Planet Python - Fri, 2019-08-16 08:57
Introduction

Memory management is the process of efficiently allocating, de-allocating, and coordinating memory so that all the different processes run smoothly and can optimally access different system resources. Memory management also involves cleaning memory of objects that are no longer being accessed.

In Python, the memory manager is responsible for these kinds of tasks by periodically running to clean up, allocate, and manage the memory. Unlike C, Java, and other programming languages, Python manages objects by using reference counting. This means that the memory manager keeps track of the number of references to each object in the program. When an object's reference count drops to zero, which means the object is no longer being used, the garbage collector (part of the memory manager) automatically frees the memory from that particular object.

The user need not worry about memory management, as the process of allocating and de-allocating memory is fully automatic. The reclaimed memory can be used by other objects.
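To see this automatic reclamation in action, here is a minimal sketch using only the standard library; the Payload class and the printed message are made up for illustration:

import weakref

class Payload:
    """A throwaway object so we can watch its lifetime."""
    pass

obj = Payload()
# Register a callback that fires when the object is deallocated.
weakref.finalize(obj, lambda: print("Payload was deallocated"))

alias = obj   # a second reference to the same object
del obj       # one reference remains, so nothing is freed yet
del alias     # count drops to zero; prints "Payload was deallocated"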

Python Garbage Collection

As explained earlier, Python deletes objects that are no longer referenced in the program to free up memory space. This process in which Python frees blocks of memory that are no longer used is called Garbage Collection. The Python Garbage Collector (GC) runs during the program execution and is triggered when the reference count drops to zero. The reference count increases if an object is assigned a new name or is placed in a container, like a tuple or dictionary. Similarly, the reference count decreases when the reference to an object is reassigned, when the object's reference goes out of scope, or when the object is deleted.
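Here is a small sketch of those rules using sys.getrefcount(); note the reported count is always one higher than you might expect, because the function's own argument temporarily adds a reference (the variable names are arbitrary):

import sys

x = []                     # one reference: x itself
print(sys.getrefcount(x))  # 2 (x plus the temporary argument reference)

y = x                      # assigning a new name increases the count
print(sys.getrefcount(x))  # 3

box = [x]                  # placing the object in a container increases it again
print(sys.getrefcount(x))  # 4

y = None                   # reassigning a reference decreases the count
del box                    # deleting the container decreases it too
print(sys.getrefcount(x))  # 2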

Python's memory is a heap that contains the objects and other data structures used in the program. The allocation and de-allocation of this heap space are controlled by the Python memory manager through the use of API functions.

Python Objects in Memory

Every variable in Python refers to an object. Objects can either be simple (containing numbers, strings, etc.) or containers (dictionaries, lists, or user-defined classes). Furthermore, Python is a dynamically typed language, which means that we do not need to declare variables or their types before using them in a program.

For example:

>>> x = 5
>>> print(x)
5
>>> del x
>>> print(x)
Traceback (most recent call last):
  File "<mem_manage>", line 1, in <module>
    print(x)
NameError: name 'x' is not defined

If you look at the first two lines of the above program, the object x is defined. When we delete the object x and then try to use it, we get an error stating that the variable x is not defined.

You can see that garbage collection in Python is fully automated and the programmer does not need to worry about it, unlike languages like C.

Modifying the Garbage Collector

The Python garbage collector has three generations into which objects are classified. A new object, at the start of its life cycle, belongs to the first generation of the garbage collector. If the object survives a garbage collection, it is moved up to the next generation. Each of the 3 generations of the garbage collector has a threshold. Specifically, when the number of allocations minus the number of de-allocations exceeds the threshold, that generation will run garbage collection.

Earlier generations are also garbage collected more often than the higher generations. This is because newer objects are more likely to be discarded than old objects.

The gc module includes functions to change the threshold values, trigger a garbage collection process manually, disable the garbage collection process, etc. We can check the threshold values of the different generations of the garbage collector using the get_threshold() function:

import gc

print(gc.get_threshold())

Sample Output:

(700, 10, 10)

As you see, here we have a threshold of 700 for the first generation, and 10 for each of the other two generations.
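These thresholds are compared against the number of tracked objects in each generation, which you can inspect with gc.get_count(); in this minimal sketch the numbers in the comments are only examples and will differ on your machine:

import gc

print(gc.get_count())  # e.g. (431, 10, 1): tracked objects per generation
gc.collect()           # force a full collection across all three generations
print(gc.get_count())  # much smaller counts right after the collection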

We can alter the threshold values for triggering the garbage collection process using the set_threshold() function of the gc module:

gc.set_threshold(900, 15, 15)

In the above example, we have increased the threshold value for all 3 generations. Increasing the threshold value will decrease how frequently the garbage collector runs. As a developer, you normally need not think too much about Python's garbage collection, but this control can be useful when optimizing the Python runtime for your target system. One of the key benefits is that Python's garbage collection mechanism handles many low-level details for the developer automatically.
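As an example of that control, the gc module also lets you pause and resume the cycle collector entirely; reference counting keeps freeing objects while it is paused, so only cyclic garbage accumulates. A minimal sketch:

import gc

gc.disable()           # pause automatic cycle collection
print(gc.isenabled())  # False
# ... run an allocation-heavy section where GC pauses are unwanted ...
gc.enable()            # resume automatic cycle collection
print(gc.isenabled())  # True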

Why Perform Manual Garbage Collection?

We know that the Python interpreter keeps track of references to objects used in a program. In earlier versions of Python (until version 1.6), the Python interpreter used only the reference counting mechanism to handle memory. When the reference count drops to zero, the Python interpreter automatically frees the memory. This classical reference counting mechanism is very effective, except that it fails when the program has reference cycles. A reference cycle happens when one or more objects reference each other, so the reference count never reaches zero.

Let's consider an example.

>>> def create_cycle():
...     list = [8, 9, 10]
...     list.append(list)
...     return list
...
>>> create_cycle()
[8, 9, 10, [...]]

The above code creates a reference cycle, where the object list refers to itself. Hence, the memory for the object list will not be freed automatically when the function returns. The reference cycle problem can't be solved by reference counting alone. However, it can be solved by changing the behavior of the garbage collector in your Python application.

To do so, we can use the gc.collect() function of the gc module.

import gc

n = gc.collect()
print("Number of unreachable objects collected by GC:", n)

gc.collect() returns the number of objects it has collected and de-allocated.

There are two ways to perform manual garbage collection: time-based or event-based garbage collection.

Time-based garbage collection is pretty simple: the gc.collect() function is called after a fixed time interval.

Event-based garbage collection calls the gc.collect() function after an event occurs (e.g. when the application exits or when the application has been idle for a specific time period).
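One possible sketch of both styles is below; the 60-second interval and the choice of threading.Timer and atexit are illustrative assumptions, not the only way to do it:

import atexit
import gc
import threading

def collect_periodically(interval_seconds=60.0):
    """Time-based: run a collection, then reschedule ourselves."""
    gc.collect()
    timer = threading.Timer(interval_seconds, collect_periodically,
                            args=(interval_seconds,))
    timer.daemon = True  # don't keep the process alive just for GC
    timer.start()

collect_periodically()

# Event-based: run one final collection when the application exits.
atexit.register(gc.collect)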

Let's see how manual garbage collection works by creating a few reference cycles.

import sys, gc

def create_cycle():
    list = [8, 9, 10]
    list.append(list)

def main():
    print("Creating garbage...")
    for i in range(8):
        create_cycle()
    print("Collecting...")
    n = gc.collect()
    print("Number of unreachable objects collected by GC:", n)
    print("Uncollectable garbage:", gc.garbage)

if __name__ == "__main__":
    main()
    sys.exit()

The output is as below:

Creating garbage...
Collecting...
Number of unreachable objects collected by GC: 8
Uncollectable garbage: []

The script above creates a list object that is referred to by a variable, creatively named list. The first element of the list object refers to the list itself. The reference count of the list object is therefore always greater than zero, even after it is deleted or goes out of scope in the program. Hence, the list object is not freed by reference counting, due to the circular reference. The garbage collector mechanism in Python will automatically check for, and collect, circular references periodically.

In the above code, as the reference count is at least 1 and can never reach 0, we have forcefully garbage collected the objects by calling gc.collect(). However, remember not to force garbage collection frequently: even though it frees memory, the GC takes time to evaluate every object's eligibility for collection, consuming processor time and resources. Also, remember to manage the garbage collector manually only after your app has started completely.
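A rough way to observe that cost is to time a full collection while many container objects are alive; this is only a sketch, and the object count and timing are arbitrary and machine-dependent:

import gc
import time

# A million small lists: plenty of tracked containers for the GC to scan.
data = [[i] for i in range(1_000_000)]

start = time.perf_counter()
gc.collect()
elapsed = time.perf_counter() - start
print(f"Full collection took {elapsed:.3f} seconds")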

Conclusion

In this article, we discussed how memory management in Python is handled automatically by reference counting and garbage collection strategies. Garbage collection is essential to a working memory management mechanism in Python. Also, programmers need not worry about deleting allocated memory, as it is taken care of by the Python memory manager. This leads to fewer memory leaks and better performance.

Categories: FLOSS Project Planets

PyCharm: PyCharm 2019.2.1 RC

Planet Python - Fri, 2019-08-16 08:48

PyCharm 2019.2.1 release candidate is available now!

Fixed in this Version
  • Fixed an issue in our latest release that caused debugger functions like “Step into” to not work properly.
  • Fixed AltGr keymaps for certain characters that were not working.
Further Improvements
  • New SQL completion: suggestions of join conditions based on column or table name matches, and auto-injection of SQL into literals.
  • Some JavaScript and Vue.js inspection issues were resolved.
  • And more, check out our release notes for more details.
Getting the New Version

Download the RC from Confluence.

The release candidate (RC) is not an early access program (EAP) build, and does not bundle an EAP license. If you get the PyCharm Professional Edition RC, you will either need a currently active PyCharm subscription, or you will receive a 30-day free trial.

Categories: FLOSS Project Planets

Spinning Code: SC DUG August 2019

Planet Drupal - Fri, 2019-08-16 07:00

After a couple months off, SC DUG met this month with a presentation on super cheap Drupal hosting.

Chris Zietlow from Mindgrub, Will Jackson from Kanopi Studios, and I all gave short talks on very cheap ways to host Drupal 8.

Chris opened by talking about using AWS Micro servers. Will shared a solution using a Raspberry Pi for a fully wireless server. I closed the discussion with a review of using Drupal Tome on Netlify.

We all worked from a loose set of rules to help keep us honest and prevent overlapping:

Rules for Cheap D8 Hosting Challenge

The goal is to figure out the cheapest D8 hosting that would actually function for a project, even if it is deeply irresponsible to actually use.

Rules
  1. It has to actually work for D8 (so a modern PHP version, a working database, etc.).
  2. You do not actually have to spend the money, but you do need to know all the steps required to make it work.
  3. It needs to honor the TOS for any networks and services you use (no illegal network taps – legal hidden taps are fair game).
  4. You have to share your idea with the other players so we don’t have two people propose the same solution (first-come-first-serve on ideas).
Reporting

Be prepared to talk for about 5 minutes on how your solution would work. Your talk needs to include:

  1. Estimated Monthly cost for the first year.
  2. Steps required to make it work.
  3. Known weaknesses.

If you have a super cheap hosting solution for Drupal 8 we’d love to hear about it.

Categories: FLOSS Project Planets

Test and Code: 83: PyBites Code Challenges behind the scenes - Bob Belderbos

Planet Python - Fri, 2019-08-16 03:00

Bob Belderbos and Julian Sequeira started PyBites a few years ago.
They started doing code challenges along with people around the world and writing about it.

Then came the codechalleng.es platform, where you can do code challenges in the browser and have your answer checked by pytest tests. But how does it all work?

Bob joins me today to go behind the scenes and share the tech stack running the PyBites Code Challenges platform.

We talk about the technology, the testing, and how it went from a cool idea to a working platform.

Special Guest: Bob Belderbos.

Sponsored By:

  • PyCharm Professional: https://testandcode.com/pycharm (PyCharm is designed by programmers, for programmers, to provide all the tools you need for productive Python development.)

Support Test & Code - Python Testing & Development: https://www.patreon.com/testpodcast

Links:

  • PyBites: https://pybit.es/
  • PyBites Code Challenges coding platform: https://codechalleng.es/
  • Learning Paths: https://codechalleng.es/bites/paths
  • Julian's article on whiteboard interviews: https://pybit.es/whiteboard-interviews.html
Categories: FLOSS Project Planets

François Marier: Passwordless restricted guest account on Ubuntu

Planet Debian - Thu, 2019-08-15 23:10

Here's how I created a restricted but not ephemeral guest account on an Ubuntu 18.04 desktop computer that can be used without a password.

Create a user that can login without a password

First of all, I created a new user with a random password (using pwgen -s 64):

adduser guest

Then following these instructions, I created a new group and added the user to it:

addgroup nopasswdlogin
adduser guest nopasswdlogin

In order to let that user login using GDM without a password, I added the following to the top of /etc/pam.d/gdm-password:

auth sufficient pam_succeed_if.so user ingroup nopasswdlogin

Note that this user is unable to ssh into this machine since it's not part of the sshuser group I have setup in my sshd configuration.

Privacy settings

In order to reduce the amount of digital traces left between guest sessions, I logged into the account using a GNOME session, opened gnome-control-center, and adjusted the settings in the privacy section.

Then I replaced Firefox with Brave in the sidebar, set it as the default browser in gnome-control-center, and configured Brave to clear everything on exit.

Create a password-less system keyring

In order to suppress prompts to unlock gnome-keyring, I opened seahorse and deleted the default keyring.

Then I started Brave, which prompted me to create a new keyring so that it can save the contents of its password manager securely. I set an empty password on that new keyring, since I'm not going to be using it.

I also made sure to disable saving of passwords, payment methods and addresses in the browser too.

Restrict user account further

Finally, taking an idea from this similar solution, I prevented the user from making any system-wide changes by putting the following in /etc/polkit-1/localauthority/50-local.d/10-guest-policy.pkla:

[guest-policy]
Identity=unix-user:guest
Action=*
ResultAny=no
ResultInactive=no
ResultActive=no

If you know of any other restrictions that could be added, please leave a comment!

Categories: FLOSS Project Planets

Twisted Matrix Labs: Twisted 19.7.0 Released

Planet Python - Thu, 2019-08-15 19:38
On behalf of Twisted Matrix Laboratories and our long-suffering release manager Amber Brown, I am honored to announce[1] the release of Twisted 19.7.0!

The highlights of this release include:
  • A full description on the PyPI page!  Check it out here: https://pypi.org/project/Twisted/19.7.0/ (and compare to the slightly sad previous version, here: https://pypi.org/project/Twisted/19.2.1/)
  • twisted.test.proto_helpers has been renamed to "twisted.internet.testing"
    • This removes the gross special-case carve-out where it was the only "public" API in a test module, and now the rule is that all test modules are private once again.
  • Conch's SSH server now supports hmac-sha2-512.
  • The XMPP server in Twisted Words will now validate certificates!
  • A nasty data-corruption bug in the IOCP reactor was fixed. If you're doing high-volume I/O on Windows you'll want to upgrade!
  • Twisted Web no longer gives clients a traceback by default, both when you instantiate Site and when you use twist web on the command line.  You can turn this behavior back on for local development with twist web --display-tracebacks (for the Site case, see the sketch after this list).
  • Several bugfixes and documentation fixes resolving bytes/unicode type confusion in twisted.web.
  • Python 3.4 is no longer supported.
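For the Site case, a minimal sketch of opting back in during local development might look like this; the Root resource and the port number are made-up illustrations, not Twisted's documented example:

from twisted.internet import reactor
from twisted.web import server
from twisted.web.resource import Resource

class Root(Resource):
    isLeaf = True

    def render_GET(self, request):
        # With the 19.7.0 defaults, the client sees a generic error page here.
        raise RuntimeError("boom")

site = server.Site(Root())
site.displayTracebacks = True  # show tracebacks again, for local development only
reactor.listenTCP(8080, site)
reactor.run()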
pip install -U twisted[tls] and enjoy all these enhancements today!
Thanks for using Twisted,
-glyph
[1]: somewhat belatedly: it came out 10 days ago.  Oops!
Categories: FLOSS Project Planets

Kdenlive 19.08 released

Planet KDE - Thu, 2019-08-15 18:05

After a well deserved summer break, the Kdenlive community is happy to announce the first major release after the code refactoring. This version comes with a large number of fixes and nifty new features which will lay the groundwork for the 3 point editing system planned for this cycle. The Project Bin received improvements to the icon view mode, and new features were added like the ability to seek while hovering over clips with the mouse cursor and the possibility to add a whole folder hierarchy. On the usability front, a menu option was added to reset the Kdenlive config file, and now you can search for effects from all tabs instead of only the selected tab. Head to our download page for AppImage and Windows packages.

 

Highlights

3 point editing with keyboard shortcuts

With 19.08.0 we added the groundwork for full editing with keyboard shortcuts. This will speed up edit work, and you can perform editing steps which are not possible, or not as quick and easy, with the mouse. Working with keyboard shortcuts in 19.08 is different from former Kdenlive versions. Mouse operations have not changed and work as before.

3 important points to understand the new concept:

Source (left image):
On the left of the track head are the green vertical lines (V1 or A2). The green line is connected to the source clip in the project bin. Only when a clip is selected in the project bin does the green line show up, depending on the type of the clip (A/V clip, picture/title/color clip, audio clip).

Target (right image):
In the track head, the target V1 or A1 is active when it’s yellow. An active target track reacts to edit operations like inserting a clip even if the source is not active (see “Example of advanced edit” here).

The concept is like thinking of connectors:
Connect the source (the clip in the project bin) to a target (a track in the timeline). Only when both connectors on the same track are switched on does the clip “flow” from the project bin to the timeline. Be aware: active target tracks without a connected source still react to edit operations.

You can find a more detailed introduction in our Toolbox section here.

Adjust AV clips independently with Shift + resize to resize only the audio or video part of a clip. Meta/Windows key + move in the timeline allows moving the audio or video part to another track independently.

 

Press shift while hovering over clips in the Project Bin to seek through them.

 

Adjust the speed of a clip by pressing CTRL + dragging a clip in the timeline.

 

Now you can choose the number of channels and sample rates in the audio capture settings.

 

Other features

  • Added a parameter for steps that allows users to control the separation between keyframes generated by the motion tracker.
  • Re-enable transcode clip functionality.
  • Added a screen selection in the screen grab widget.
  • Add option to sort audio tracks in reverse order.
  • Default fade duration is now configurable from Kdenlive Settings > Misc.
  • Render dialog: add context menu to rendered jobs allowing to add rendered file as a project clip.
  • Renderwidget: Use max number of threads in render.
  • More UI components are translatable.

 

Full list of commits

  • Do not setToolTip() for the same tooltip twice. Commit.
  • Use translations for asset names in the Undo History. Commit.
  • Fix dropping clip in insert/overwrite mode. Commit.
  • Fix timeline drag in overwrite/edit mode. Commit.
  • Fix freeze deleting a group with clips on locked tracks. Commit.
  • Use the translated effect names for effect stack on the timeline. Commit.
  • Fix crash dragging clip in insert mode. Commit.
  • Use the translated transition names in the ‘Properties’ header. Commit.
  • Fix freeze and fade ins allowed to go past last frame. Commit.
  • Fix revert clip speed failing. Commit.
  • Fix revert speed clip reloading incorrectly. Commit.
  • Fix copy/paste of clip with negative speed. Commit.
  • Fix issues on clip reload: slideshow clips broken and title duration reset. Commit.
  • Fix slideshow effects disappearing. Commit.
  • Fix track effect keyframes. Commit.
  • Fix track effects don’t invalidate timeline preview. Commit.
  • Fix effect presets broken on comma locales, clear preset after resetting effect. Commit.
  • Fix crash in extract zone when no track is active. Commit.
  • Fix reverting clip speed modifies in/out. Commit.
  • Fix audio overlay showing up randomly. Commit.
  • Fix Find clip in bin not always scrolling to correct position. Commit.
  • Fix possible crash changing profile when cache job was running. Commit.
  • Fix editing bin clip does not invalidate timeline preview. Commit.
  • Fix audiobalance (MLT doesn’t handle start param as stated). Commit.
  • Fix target track inconsistencies:. Commit.
  • Make the strings in the settings dialog translatable. Commit.
  • Make effect names translatable in menus and in settings panel. Commit.
  • Remember last target track and restore when another clip is selected. Commit.
  • Don’t process insert when no track active, don’t move cursor if no clip inserted. Commit.
  • Correctly place timeline toolbar after editing toolbars. Commit.
  • Lift/gamma/gain: make it possible to have finer adjustments with Shift modifier. Commit.
  • Fix MLT effects with float param and no xml description. Commit.
  • Cleanup timeline selection: rubber select works again when starting over a clip. Commit.
  • Attempt to fix Windows build. Commit.
  • Various fixes for icon view: Fix long name breaking layout, fix seeking and subclip zone marker. Commit.
  • Fix some bugs in handling of NVidia HWaccel for proxies and timeline preview. Commit.
  • Add 19.08 screenshot to appdata. Commit.
  • Fix bug preventing sequential names when making several script renderings from same project. Commit.
  • Fix compilation with cmake < 3.5. Commit.
  • Fix extract frame retrieving wrong frame when clip fps != project fps. Commit. Fixes bug #409927
  • Don’t attempt rendering an empty project. Commit.
  • Fix incorrect source frame size for transform effects. Commit.
  • Improve subclips visual info (display zone over thumbnail), minor cleanup. Commit.
  • Small cleanup of bin preview thumbnails job, automatically fetch 10 thumbs at insert to allow quick preview. Commit.
  • Fix project clips have incorrect length after changing project fps. Commit.
  • Fix inconsistent behavior of advanced timeline operations. Commit.
  • Fix “Find in timeline” option in bin context menu. Commit.
  • Support the new logging category directory with KF 5.59+. Commit.
  • Update active track description. Commit.
  • Use extracted translations to translate asset descriptions. Commit.
  • Fix minor typo. Commit.
  • Make the file filters to be translatable. Commit.
  • Extract messages from transformation XMLs as well. Commit.
  • Don’t attempt to create hover preview for non AV clips. Commit.
  • Add Cache job for bin clip preview. Commit.
  • Preliminary implementation of Bin clip hover seeking (using shift+hover). Commit.
  • Translate assets names. Commit.
  • Some improvments to timeline tooltips. Commit.
  • Reintroduce extract clip zone to cut a clip without re-encoding. Commit. See bug #408402
  • Fix typo. Commit.
  • Add basic collision check to speed resize. Commit.
  • Bump MLT dependency to 6.16 for 19.08. Commit.
  • Exit grab mode with Escape key. Commit.
  • Improve main item when grabbing. Commit.
  • Minor improvement to clip grabbing. Commit.
  • Fix incorrect development version. Commit.
  • Make all clips in selection show grab status. Commit.
  • Fix “QFSFileEngine::open: No file name specified” warning. Commit.
  • Don’t initialize a separate Factory on first start. Commit.
  • Set name for track menu button in timeline toolbar. Commit.
  • Pressing Shift while moving an AV clip allows to move video part track independently of audio part. Commit.
  • Ensure audio encoding do not export video. Commit.
  • Add option to sort audio tracks in reverse order. Commit.
  • Warn and try fixing clips that are in timeline but not in bin. Commit.
  • Try to recover a clip if it’s parent id cannot be found in the project bin (use url). Commit. See bug #403867
  • Fix tests. Commit.
  • Default fade duration is now configurable from Kdenlive Settings > Misc. Commit.
  • Minor update for AppImage dependencies. Commit.
  • Change speed clip job: fix overwrite and UI. Commit.
  • Readd proper renaming for change speed clip jobs. Commit.
  • Add whole hierarchy when adding folder. Commit.
  • Fix subclip cannot be renamed. Store them in json and bump document version. Commit.
  • Added audio capture channel & sample rate configuration. Commit.
  • Add screen selection in screen grab widget. Commit.
  • Initial implementation of clip speed change on Ctrl + resize. Commit.
  • Fix FreeBSD compilation. Commit.
  • Render dialog: add context menu to rendered jobs allowing to add rendered file as a project clip. Commit.
  • Ensure automatic compositions are compositing with correct track on project opening. Commit.
  • Fix minor typo. Commit.
  • Add menu option to reset the Kdenlive config file. Commit.
  • Motion tracker: add steps parameter. Patch by Balazs Durakovacs. Commit.
  • Try to make binary-factory mingw happy. Commit.
  • Remove dead code. Commit.
  • Add some missing bits in Appimage build (breeze) and fix some plugins paths. Commit.
  • AppImage: disable OpenCV freetype module. Commit.
  • Docs: Unbreak menus. Commit.
  • Sync Quick Start manual with UserBase. Commit.
  • Fix transcoding crashes caused by old code. Commit.
  • Reenable transcode clip functionality. Commit.
  • Fix broken fadeout. Commit.
  • Small collection of minor improvements. Commit.
  • Search effects from all tabs instead of only the selected tab. Commit.
  • Check whether first project clip matches selected profile by default. Commit.
  • Improve marker tests, add abort testing feature. Commit.
  • Revert “Trying to submit changes through HTTPS”. Commit.
  • AppImage: define EXT_BUILD_DIR for Opencv contrib. Commit.
  • Fix OpenCV build. Commit.
  • AppImage update: do not build MLT inside dependencies so we can have more frequent updates. Commit.
  • If a timeline operation touches a group and a clip in this group is on a track that should not be affected, break the group. Commit.
  • Add tests for unlimited clips resize. Commit.
  • Small fix in tests. Commit.
  • Renderwidget: Use max number of threads in render. Commit.
  • Don’t allow resizing while dragging. Fixes #134. Commit.
  • Revert “Revert “Merge branch ‘1904’””. Commit.
  • Revert “Merge branch ‘1904’”. Commit.
  • Update master appdata version. Commit.
Categories: FLOSS Project Planets

Agaric Collective: Introduction to paragraphs migrations in Drupal

Planet Drupal - Thu, 2019-08-15 17:52

Today we will present an introduction to paragraphs migrations in Drupal. The example consists of migrating paragraphs of one type, then connecting the migrated paragraphs to nodes. A separate image migration is included to demonstrate how they are different. At the end, we will talk about behavior that deletes paragraphs when the host entity is deleted. Let’s get started.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD paragraphs migration introduction, whose machine name is ud_migrations_paragraph_intro. It comes with three migrations: ud_migrations_paragraph_intro_paragraph, ud_migrations_paragraph_intro_image, and ud_migrations_paragraph_intro_node. One content type, one paragraph type, and four fields will be created when the module is installed.

Note: Configuration placed in a module’s config/install directory will be copied to Drupal’s active configuration. And if those files have a dependencies/enforced/module key, the configuration will be removed when the listed modules are uninstalled. That is how the content type, the paragraph type, and the fields are automatically created and deleted.

You can get the Paragraphs module using Composer: composer require drupal/paragraphs. This will also download its dependency: the Entity Reference Revisions module. If your Drupal site is not Composer-based, you can get the code for both modules manually.

Understanding the example set up

The example code creates one paragraph type named UD book paragraph (ud_book_paragraph). It has two “Text (plain)” fields: Title (field_ud_book_paragraph_title) and Author (field_ud_book_paragraph_author). A new UD Paragraphs (ud_paragraphs) content type is also created. This has two fields: Image (field_ud_image) and Favorite book (field_ud_favorite_book) containing references to images and book paragraphs imported in separate migrations. The words in parenthesis represent the machine names of the different elements.

The paragraph migration

Migrating into a paragraph type is very similar to migrating into a content type. You specify the source, process the fields making any required transformation, and set the destination entity and bundle. The following code snippet shows the source, process, and destination sections:

source:
  plugin: embedded_data
  data_rows:
    - book_id: 'B10'
      book_title: 'The definite guide to Drupal 7'
      book_author: 'Benjamin Melançon et al.'
    - book_id: 'B20'
      book_title: 'Understanding Drupal Views'
      book_author: 'Carlos Dinarte'
    - book_id: 'B30'
      book_title: 'Understanding Drupal Migrations'
      book_author: 'Mauricio Dinarte'
  ids:
    book_id:
      type: string
process:
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: book_author
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: ud_book_paragraph

The most important part of a paragraph migration is setting the destination plugin to entity_reference_revisions:paragraph. This plugin is actually provided by the Entity Reference Revisions module. It is very important to note that paragraphs entities are revisioned. This means that when you want to create a reference to them, you need to provide two IDs: target_id and target_revision_id. Regular entity reference fields like files, images, and taxonomy terms only require the target_id. This will be further explained with the node migration.

The other configuration that you can optionally set in the destination section is default_bundle. The value will be the machine name of the paragraph type you are migrating into. You can do this when all the paragraphs for a particular migration definition file will be of the same type. If that is not the case, you can leave out the default_bundle configuration and add a mapping for the type entity property in the process section.

You can execute the paragraph migration with this command: drush migrate:import ud_migrations_paragraph_intro_paragraph. After running the migration, there is not much you can do to verify that it worked. Contrary to other entities, there is no user interface, available out of the box, that lists all paragraphs in the system. One way to verify if the migration worked is to manually create a View that shows paragraphs. Another way is to query the database directly. You can inspect the tables that store the paragraph fields’ data. In this example, the tables would be:

  • paragraph__field_ud_book_paragraph_author for the current author.
  • paragraph__field_ud_book_paragraph_title for the current title.
  • paragraph_r__8c3a9563ac for all the author revisions.
  • paragraph_r__3fa7e9863a for all the title revisions.

Each of those tables contains information about the bundle (paragraph type), the entity id, the revision id, and the migrated field value. Table names are derived from the machine names of the fields. If they are too long, the field name will be hashed to produce a shorter table name. Having to query the database is not ideal. Unfortunately, the options available to check if a paragraph migration worked are limited at the moment.

The node migration

The node migration will serve as the host for both referenced entities: images and paragraphs. The image migration is very similar to the one explained in a previous article. This time, the focus will be the paragraph migration. Both of them are set as dependencies of the node migration, so they need to be executed in advance. The following snippet shows how the source, destinations, and dependencies are set:

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      name: 'Michele Metts'
      photo_file: 'P01'
      book_ref: 'B10'
    - unique_id: 2
      name: 'Benjamin Melançon'
      photo_file: 'P02'
      book_ref: 'B20'
    - unique_id: 3
      name: 'Stefan Freudenberg'
      photo_file: 'P03'
      book_ref: 'B30'
  ids:
    unique_id:
      type: integer
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - ud_migrations_paragraph_intro_image
    - ud_migrations_paragraph_intro_paragraph
  optional: []

Note that photo_file and book_ref both contain the unique identifier of records in the image and paragraph migrations, respectively. These can be used with the migration_lookup plugin to map the reference fields in the nodes to be migrated. ud_paragraphs is the machine name of the target content type.

The mapping of the image reference field follows the same pattern as the one explained in the article on migration dependencies. Using the migration_lookup plugin, you indicate which is the migration that should be searched for the images. You also specify which source column contains the unique identifiers that match those in the image migration. This operation will return a single value: the file ID (fid) of the image. This value can be assigned to the target_id subfield of field_ud_image to establish the relationship. The following code snippet shows how to do it:

field_ud_image/target_id:
  plugin: migration_lookup
  migration: ud_migrations_paragraph_intro_image
  source: photo_file

Paragraph field mappings

Before diving into the paragraph field mapping, let’s think about what needs to be done. Paragraphs are revisioned entities. To make a reference to them, you need two IDs: their entity id and their entity revision id. These two values need to be assigned to two subfields of the paragraph reference field: target_id and target_revision_id respectively. You have to come up with a process pipeline that complies with this requirement. There are many ways to do it, and the specifics will depend on your field configuration. In this example, the paragraph reference field allows an unlimited number of paragraphs to be associated, but only of one type: ud_book_paragraph. Another thing to note is that even though the field allows you to add as many paragraphs as you want, the example migrates exactly one paragraph.

With those considerations in mind, the mapping of the paragraph field will be a two step process. First, use the migration_lookup plugin to get a reference to the paragraph. Second, use the fetched values to set the paragraph reference subfields. The following code snippet shows how to do it:

pseudo_mbe_book_paragraph:
  plugin: migration_lookup
  migration: ud_migrations_paragraph_intro_paragraph
  source: book_ref
field_ud_favorite_book:
  plugin: sub_process
  source:
    - '@pseudo_mbe_book_paragraph'
  process:
    target_id: '0'
    target_revision_id: '1'

The first step is a normal migration_lookup procedure. The important difference is that instead of getting a single value, like with images, the paragraph lookup operation will return an array of two values. The format is like [3, 7] where the 3 represents the entity id and the 7 represents the entity revision id of the paragraph. Note that the array keys are not named. To access those values, you would use the index of the elements starting with zero (0). This will be important later. The returned array is stored in the pseudo_mbe_book_paragraph pseudofield.

The second step is to set the target_id and target_revision_id subfields. In this example, field_ud_favorite_book is the machine name paragraph reference field. Remember that it is configured to accept an arbitrary number of paragraphs, and each will require passing an array of two elements. This means you need to process an array of arrays. To do that, you use the sub_process plugin to iterate over an array of paragraph references. In this example, the structure to iterate over would be like this:

[ [3, 7] ]

Let’s dissect how to do the mapping of the paragraph reference field. The source configuration of the sub_process plugin contains an array of paragraph references. In the example, that array has a single element: the '@pseudo_mbe_book_paragraph' pseudofield. The quotes (') and at sign (@) are required to reuse an element that appears before in the process pipeline. Then, in the process configuration, you set the subfields for the paragraph reference field. It is worth noting that at this point you are iterating over a list of paragraph references, even if that list contains only one element. If you had more than one paragraph to migrate, whatever you defined in process will apply to all of them.

The process configuration is an array of subfield mappings. The left side of the assignment is the name of the subfield you want to set. The right side of the assignment is an array index of the paragraph reference being processed. Remember that this array does not have named keys, so you use their numerical index to refer to them. The example sets the target_id subfield to the element at index 0 and the target_revision_id subfield to the element at index 1. Using the example data, this would be target_id: 3 and target_revision_id: 7. The quotes around the numerical indexes are important. If not used, the migration will not find the indexes and the paragraphs will not be associated. The end result of this operation will be something like this:

'field_ud_favorite_book' => array (1) [
  array (2) [
    'target_id' => string (1) "3"
    'target_revision_id' => string (1) "7"
  ]
]

There are three ways to run the migrations: manually, executing dependencies, and using tags. The following code snippet shows the three options:

# 1) Manually.
$ drush migrate:import ud_migrations_paragraph_intro_image
$ drush migrate:import ud_migrations_paragraph_intro_paragraph
$ drush migrate:import ud_migrations_paragraph_intro_node

# 2) Executing dependencies.
$ drush migrate:import ud_migrations_paragraph_intro_node --execute-dependencies

# 3) Using tags.
$ drush migrate:import --tag='UD Paragraphs Intro'

And that is one way to map paragraph reference fields. In the end, all you have to do is set the target_id and target_revision_id subfields. The process pipeline that gets you to that point can vary depending on how your paragraphs are configured. The following is a non-exhaustive list of things to consider when migrating paragraphs:

  • How many paragraphs types can be referenced?
  • How many paragraphs instances are being migrated? Is this a multivalue field?
  • Do paragraphs have translations?
  • Do paragraphs have revisions?
Do migrated paragraphs disappear upon node rollback?

Paragraphs migrations are affected by a particular behavior of revisioned entities. If the host entity is deleted, and the paragraphs do not have translations, the whole paragraph gets deleted. That means that deleting a node will cause the referenced paragraphs’ data to be removed. How does this affect your migration workflow? If the migration of the host entity is rolled back, the paragraphs will be removed, and the migrate API will not know about it. In this example, if you run a migrate status command after rolling back the node migration, you will see that the paragraph migration indicates that there are no pending elements to process. The file migration for the images will report the same, but in that case, the images will remain on the system.

In any migration project, it is common to perform rollback operations to test new field mappings or fix errors. Thus, chances are very high that you will stumble upon this behavior. Thanks to Damien McKenna for helping me understand this behavior and tracking it to the rollback() method of the EntityReferenceRevisions destination plugin. So, what do you do to recover the deleted paragraphs? You have to roll back both migrations: node and paragraph. And then, you have to import the two again. The following snippet shows how to do it:

# 1) Rollback both migrations.
$ drush migrate:rollback ud_migrations_paragraph_intro_node
$ drush migrate:rollback ud_migrations_paragraph_intro_paragraph

# 2) Import both migrations again.
$ drush migrate:import ud_migrations_paragraph_intro_paragraph
$ drush migrate:import ud_migrations_paragraph_intro_node

What did you learn in today’s blog post? Have you migrated paragraphs before? If so, what challenges have you found? Did you know paragraph reference fields require two subfields to be set? Did you know that deleting the host entity also deletes referenced paragraphs? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories: FLOSS Project Planets

Cantor 19.08

Planet KDE - Thu, 2019-08-15 17:12

Over the last year, development in Cantor has kept up quite a good momentum. After the many new features and stabilization work done in the 18.12 release (see this blog post for an overview), we continued to work on improving the application in 19.04. Today the release of KDE Applications 19.08, and with it of Cantor 19.08, was announced. In this release, too, we concentrated mostly on improving the usability of Cantor and stabilizing the application. See the ChangeLog file for the full list of changes.

Among the new features targeting usability, we want to mention the improved handling of the “backends”. As you know, Cantor serves as the front end to different open-source computer algebra systems and programming languages and requires these backends for the actual computation. The communication with the backends is handled via different plugins that are installed and loaded on demand. In the past, if a plugin for a specific backend failed to initialize (e.g. because the backend executable was not found), we didn’t show it in the “Choose a Backend” dialog and the user was completely lost. Now we still don’t allow creating a worksheet for such a backend, but we show the entry in the dialog together with a message about why the plugin is disabled, as in the example below, which asks the user to check the executable path in the settings:



The same applies to cases where the plugin, as compiled and provided by the Linux distribution, doesn’t match the version of the backend that the user installed manually on the system and asked Cantor to use. Here we clearly inform the user about the version mismatch and also advise what to do:


Speaking of custom installations of backends and of Julia: in 19.08 we allow setting a custom path to the Julia interpreter, similar to what is already possible for some other backends.

The handling of Markdown and LaTeX entries became more comfortable. It is now possible to quickly switch from the rendered result to the original code via a mouse double click, and back, as usual, via evaluation of the entry. Furthermore, the results of such rendered Markdown and LaTeX entries are now saved as part of the project. This makes it possible to view projects containing Markdown and LaTeX entries on systems without support for the Markdown and LaTeX rendering process, and it also decreases project loading times, since the ready-to-use results can be used directly.

In 19.08 we added the “Recent Files” menu allowing for a quick access to the recently opened projects:


Among the important bug fixes, we want to mention those that improved the communication with the external processes “hosting” embedded interpreters like Python and Julia. Cantor now reacts much better to errors and crashes produced in those external processes. For Python, the interruption of running commands was improved.

While working on 19.08 to make the application more usable and stable, we also worked on some bigger new features in parallel. This development is being done as part of a Google Summer of Code project, whose goal is to add support for Jupyter notebooks in Cantor. The idea behind this project and its progress are covered in a series of blogs (here, here and here). The code is in quite good shape and has already been merged to master. This is how such a Jupyter notebook looks in Cantor:

We plan to release this in the upcoming release of KDE Applications 19.12. Stay tuned!

Categories: FLOSS Project Planets

Hook 42: Dad Jokes, Development, and Drupal in Denver

Planet Drupal - Thu, 2019-08-15 14:47
By Aimee Degnan, Thu, 08/15/2019 - 18:47
Categories: FLOSS Project Planets

Python Insider: Inspect PyPI event logs to audit your account's and project's security

Planet Python - Thu, 2019-08-15 13:45
To help you check for security problems, PyPI is adding an advanced audit log of user actions beyond the current journal. This will, for instance, allow publishers to track all actions taken by third party services on their behalf.

This beta feature is live now on PyPI and on Test PyPI.

Background:
We're further increasing the security of the Python Package Index with another new beta feature: an audit log of sensitive actions that affect users and projects. This is thanks to a grant from the Open Technology Fund, coordinated by the Packaging Working Group of the Python Software Foundation.

Details:
[Screenshot: Project security history display, listing events (such as "file removed from release version 1.0.1") with user, date/time, and IP address for each event.]

We're adding a display so you can look at things that have happened in your user account or project, and check for signs someone's stolen your credentials.

In your account settings, you can view a log of sensitive actions from the last two weeks that are relevant to your user account, and if you are an Owner of at least one project on PyPI, you can go to that project's Manage Project page to view a log of sensitive actions (performed by any user) relevant to that project. (And PyPI site administrators are able to view the full audit log for all users and all projects.)

Please help us test this, and report issues.

[Screenshot: User security history display, listing events (such as "API token added") with additional details (such as token scope), date/time, and IP address for each event.]

In beta:
We're still refining this and may fail to log, or to properly display, events in the audit log. Also, sensitive event logging and display only started on 16 August 2019, so you won't see sensitive events from before that date. (Read more technical details about implementation in the GitHub issue.)

Next:
We're continuing to refine all our beta features, while working on accessibility improvements and starting to work on localization on PyPI. Follow our progress reports in more detail on Discourse.
Categories: FLOSS Project Planets

Community Working Group posts: Announcing Drupal Event Code of Conduct Training

Planet Drupal - Thu, 2019-08-15 13:25

The Drupal Community Working Group is happy to announce that we've teamed up with Otter Tech to offer live, monthly, online Code of Conduct enforcement training for Drupal Event organizers and volunteers through the end of 2019. 

The training is designed to provide "first responder" skills to Drupal community members who take reports of potential Code of Conduct issues at Drupal events, including meetups, camps, conventions, and other gatherings. The workshops will be attended by Code of Conduct enforcement teams from other open source events, which will allow cross-pollination of knowledge with the Drupal community.

Each monthly online workshop is the same; community members only have to attend one monthly workshop of their choice to complete the training.  We strongly encourage all Drupal event organizers to consider sponsoring one or two persons' attendance at this workshop.

The monthly online workshops will be presented by Sage Sharp, Otter Tech's CEO and a diversity and inclusion leader in the open source community. From the official description of the workshop, it will include:

  • Practice taking a report of a potential Code of Conduct violation (an incident report)
  • Practice following up with the reported person
  • Instructor modeling on how to take a report and follow up on a report
  • One practice scenario for a report given at an event
  • One practice scenario for a report given in an online community
  • Discussion on bias, microaggressions, personal conflicts, and false reporting
  • Frameworks for evaluating a response to a report
  • 40 minutes total of Q&A time

In addition, we have received a Drupal Community Cultivation Grant to help defray the cost of the workshop for those that need assistance. The standard cost of the workshop is $350, but Otter Tech has worked with us to allow us to provide it for $300. To register for the workshop, first let us know that you're interested by completing this sign-up form - everyone who completes the form will receive a coupon code for $50 off the regular price of the workshop.

For those that require additional assistance, we have a limited number of $100 subsidies available, bringing the workshop price down to $200. Subsidies will be provided based on reported need as well as our goal to make this training opportunity available to all corners of our community. To apply for the subsidy, complete the relevant section on the sign-up form. The deadline for applying for the subsidy is end-of-business on Friday, September 6, 2019 - those selected for the subsidy will be notified after this date (in time for the September 9, 2019 workshop).

The workshops will be held on:

  • September 9 (Monday) at 3 pm to 7 pm U.S. Pacific Time / 8 am to 12 pm Australia Eastern Time
  • October 23 (Wednesday) at 5 am to 9 am U.S. Pacific Time / 2 pm to 6 pm Central European Time
  • November 14 (Thursday) at 6 pm to 10 pm U.S. Pacific Time / 1 pm to 5 pm Australia Eastern Time
  • December 4 (Wednesday) at 9 am to 1 pm U.S. Pacific Time / 6 pm to 10 pm Central European Time

Those who successfully complete the training will (at their discretion) be listed on Drupal.org (in the Drupal Community Working Group section) as a means of showing that they have completed the training. We feel that, moving forward, the Drupal community has the opportunity to have professionally trained Code of Conduct contacts at the vast majority of our events, once again leading the way in the open source community.

We are fully aware that presenting the workshops in English limits who will be able to attend. We are more than interested in finding additional professional Code of Conduct workshops in other languages. Please contact us if you can assist.

Categories: FLOSS Project Planets

InternetDevels: Great examples of high tech company websites built on Drupal

Planet Drupal - Thu, 2019-08-15 11:42

Drupal is moving into the future, adopting more and more innovative trends. No wonder high-tech engineering leaders trust Drupal and build their sites with it.

Drupal in high-tech: innovative companies + innovative CMS

They have found each other! Thinking about Drupal’s innovative spirit, there are plenty of capabilities we could mention, so here are at least a few:

Read more
Categories: FLOSS Project Planets

KTouch in KDE Apps 19.08.0

Planet KDE - Thu, 2019-08-15 10:34

KTouch, an application to learn and practice touch typing, has received a considerable update with today's release of KDE Apps 19.08.0. It includes a complete redesign by me of the home screen, which is used to select the lesson to train on.

New home screen of KTouch

There is now a new sidebar offering all the courses KTouch has, for a total of 34 different keyboard layouts. Previous versions of KTouch presented only the courses matching the current keyboard layout. Now it is much more obvious how to train on keyboard layouts other than the current one.

Other improvements in this release include:

  • Tab focus now works as expected throughout the application and allows training without ever touching the mouse.
  • Access to training statistics for individual lessons from the home screen has been added.
  • KTouch now supports rendering on HiDPI screens.

KTouch 19.08.0 is available on the Snap Store and is coming to your Linux distribution.

Categories: FLOSS Project Planets

Eelke Blok: Fighting content spam on Drupal with Akismet

Planet Drupal - Thu, 2019-08-15 10:15

Once, the Drupal community had Mollom, and everything was good. It was a web service with an API for scanning comments and other user-submitted content; it would let your site know whether it thought the content was spam, so the site could safely publish it. Or not. It was created by our very own Dries Buytaert and obviously had a Drupal module. It was the service of choice for Drupal sites struggling with comment spam. Unfortunately, Mollom no longer exists. But there is an alternative, from the WordPress world: Akismet.

Categories: FLOSS Project Planets

Julian Andres Klode: APT Patterns

Planet Debian - Thu, 2019-08-15 09:55

If you have ever used aptitude a bit more extensively on the command-line, you’ll probably have come across its patterns. This week I spent some time implementing (some) patterns for apt, so you do not need aptitude for that, and I want to let you in on the details of merge request !74.

so, what are patterns?

Patterns allow you to specify complex search queries to select the packages you want to install/show. For example, the pattern ?garbage can be used to find all packages that have been automatically installed but are no longer depended upon by manually installed packages. Or the pattern ?automatic allows you to find all automatically installed packages.

You can combine patterns into more complex ones; for example, ?and(?automatic,?obsolete) matches all automatically installed packages that do not exist any longer in a repository.
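
For instance, reusing that pattern to actually remove those packages could look like the following (a sketch: the single quotes are an assumption about your shell, since ( and ) are usually shell metacharacters, and are not part of the pattern syntax itself):

apt remove '?and(?automatic,?obsolete)'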

There are also explicit targets, so you can perform queries like ?for x: ?depends(?recommends(x)): Find all packages x that depend on another package that recommends x. I do not fully comprehend those yet - I did not manage to create a pattern that matches all manually installed packages that a meta-package depends upon. I am not sure it is possible.

reducing pattern syntax

aptitude’s syntax for patterns is quite context-sensitive. If you have a pattern ?foo(?bar) it can have two possible meanings:

  1. If ?foo takes arguments (like ?depends did), then ?bar is the argument.
  2. Otherwise, ?foo(?bar) is equivalent to ?foo?bar which is short for ?and(?foo,?bar)

I find that very confusing. So, when looking at implementing patterns in APT, I went for a different approach. I first parse the pattern into a generic parse tree, without knowing anything about the semantics, and then I convert the parse tree into an APT::CacheFilter::Matcher, an object that can match against packages.

This is useful, because the syntactic structure of the pattern can be seen without having to know which patterns have arguments and which do not - basically, for the parser, ?foo and ?foo() are the same thing. That said, the second pass knows whether a pattern accepts arguments, and insists that you add them if required and omit them if the pattern does not accept any, to prevent you from confusing yourself.
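
To make the two-pass idea concrete, here is a minimal sketch in C++ (APT's implementation language). The types below are invented for this illustration and do not mirror APT's actual classes; only APT::CacheFilter::Matcher is a real name from the codebase.

#include <iostream>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

// Pass 1 output: a generic parse tree. The parser records only names and
// argument lists, so "?foo" and "?foo()" produce the same node.
struct Node {
    std::string name;        // e.g. "?and", "?true"
    std::vector<Node> args;  // empty for both "?foo" and "?foo()"
};

// Pass 2 output: an object that can match packages (a stand-in for
// APT::CacheFilter::Matcher; here a package is just its name).
struct Matcher {
    virtual ~Matcher() = default;
    virtual bool match(const std::string &pkg) const = 0;
};

struct TrueMatcher : Matcher {
    bool match(const std::string &) const override { return true; }
};

struct AndMatcher : Matcher {
    std::vector<std::unique_ptr<Matcher>> children;
    bool match(const std::string &pkg) const override {
        for (const auto &c : children)
            if (!c->match(pkg))
                return false;
        return true;
    }
};

// Pass 2: only here do we learn each pattern's arity and enforce it.
std::unique_ptr<Matcher> compile(const Node &n) {
    if (n.name == "?true") {
        if (!n.args.empty())
            throw std::runtime_error("?true takes no arguments");
        return std::make_unique<TrueMatcher>();
    }
    if (n.name == "?and") {
        if (n.args.empty())
            throw std::runtime_error("?and requires arguments");
        auto m = std::make_unique<AndMatcher>();
        for (const auto &a : n.args)
            m->children.push_back(compile(a));
        return m;
    }
    throw std::runtime_error("unknown pattern: " + n.name);
}

int main() {
    // Corresponds to the pattern "?and(?true,?true)".
    Node tree{"?and", {Node{"?true", {}}, Node{"?true", {}}}};
    std::cout << compile(tree)->match("apt") << "\n";  // prints 1
}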

aptitude also supports shortcuts. For example, you could write ~c instead of ?config-files, or ~m for ?automatic; then combine them like ~m~c instead of using ?and. I have not implemented these short patterns for now, focusing instead on getting the basic functionality working.

So in our example ?foo(?bar) above, we can immediately dismiss parsing that as ?foo?bar:

  1. we do not support concatenation instead of ?and.
  2. we always parse ( as introducing an argument list, no matter whether ?foo supports arguments or not

[Image: apt not understanding invalid patterns]

Supported syntax

At the moment, APT supports two kinds of patterns: basic logic ones like ?and, and patterns that apply to an entire package as opposed to a specific version. This was done as a starting point for the merge; patterns for versions will come in the next round.

We also do not have any support for explicit search targets such as ?for x: ... yet - as explained, I do not yet fully understand them, and hence do not want to commit on them.

The full list of the first round of patterns is below, helpfully converted from the apt-patterns(7) docbook to markdown by pandoc.

logic patterns

These patterns provide the basic means to combine other patterns into more complex expressions, as well as ?true and ?false patterns.

?and(PATTERN, PATTERN, ...)

Selects objects where all specified patterns match.

?false

Selects nothing.

?not(PATTERN)

Selects objects where PATTERN does not match.

?or(PATTERN, PATTERN, ...)

Selects objects where at least one of the specified patterns match.

?true

Selects all objects.

package patterns

These patterns select specific packages.

?architecture(WILDCARD)

Selects packages matching the specified architecture, which may contain wildcards using any.

?automatic

Selects packages that were installed automatically.

?broken

Selects packages that have broken dependencies.

?config-files

Selects packages that are not fully installed, but have solely residual configuration files left.

?essential

Selects packages that have Essential: yes set in their control file.

?exact-name(NAME)

Selects packages with the exact specified name.

?garbage

Selects packages that can be removed automatically.

?installed

Selects packages that are currently installed.

?name(REGEX)

Selects packages where the name matches the given regular expression.

?obsolete

Selects packages that no longer exist in repositories.

?upgradable

Selects packages that can be upgraded (have a newer candidate).

?virtual

Selects all virtual packages; that is, packages without a version. These exist when they are referenced somewhere in the archive, for example because something depends on that name.

examples
apt remove ?garbage

Remove all packages that are automatically installed and no longer needed - same as apt autoremove

apt purge ?config-files

Purge all packages that only have configuration files left
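
As a further illustration of combining the logic patterns from above (again only a sketch - the quoting is assumed to be required by your shell), purging both residual-config packages and auto-removable packages in one go might look like:

apt purge '?or(?config-files,?garbage)'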

oddities

Some things are not yet where I want them:

  • ?architecture does not support all, native, or same
  • ?installed should match only the installed version of the package, not the entire package (that is what aptitude does, and it’s a bit surprising that ?installed implies a version and ?upgradable does not)
the future

Of course, I do want to add support for the missing version patterns and explicit search patterns. I might even add support for some of the short patterns, but no promises. Some of those explicit search patterns might have slightly different syntax, e.g. ?for(x, y) instead of ?for x: y in order to make the language more uniform and easier to parse.

Another thing I want to do ASAP is to disable fallback to regular expressions when specifying package names on the command-line: apt install g++ should always look for a package called g++, and not for any package containing g (g++ being a valid regex) when there is no g++ package. I think continuing to allow regular expressions if they start with ^ or end with $ is fine - that prevents any overlap with package names, and would avoid breaking most stuff.

There also is the fallback to fnmatch(): Currently, if apt cannot find a package with the specified name using the exact name or the regex, it falls back to interpreting the argument as a glob(7) pattern. For example, apt install apt* would fall back to installing every package starting with apt if there is no package matching that as a regular expression. We can actually keep those in place, as the glob(7) syntax does not overlap with valid package names.

Maybe I should allow using [] instead of () so larger patterns become more readable, and/or add some support for comments.

There are also plans for AppStream based patterns. This would allow you to use apt install ?provides-mimetype(text/xml) or apt install ?provides-lib(libfoo.so.2). It’s not entirely clear how to package this though, we probably don’t want to have libapt-pkg depend directly on libappstream.

feedback

Talk to me on IRC, comment on the Mastodon thread, or send me an email if there’s anything you think I’m missing or should be looking at.

Categories: FLOSS Project Planets

Continuum Analytics Blog: 4 Machine Learning Use Cases in the Automotive Sector

Planet Python - Thu, 2019-08-15 09:54

From parts suppliers to vehicle manufacturers, service providers to rental car companies, the automotive and related mobility industries stand to gain significantly from implementing machine learning at scale. We see the big automakers investing in…

The post 4 Machine Learning Use Cases in the Automotive Sector appeared first on Anaconda.

Categories: FLOSS Project Planets

Kushal Das: git checkout to previous branch

Planet Python - Thu, 2019-08-15 07:26

We regularly move between git branches while working on projects. I always used to type the full branch name, say, to go back to the develop branch and then come back to the feature branch. That is a lot of typing (for the branch names etc.). I found out that we can use -, just as we use cd - to go back to the previous directory we were in.

git checkout -
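
For example (the branch names here are just placeholders):

git checkout develop      # work on develop for a bit
git checkout my-feature   # jump back to the feature branch
git checkout -            # returns to develop
git checkout -            # and back to my-feature again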

Here is a small video for demonstration.

I hope this will be useful for some people.

Categories: FLOSS Project Planets
