FLOSS Project Planets

TEN7 Blog's Drupal Posts: Case Study: Our Family Wizard

Planet Drupal - Tue, 2019-08-13 13:30
Website Redesign: A Professional Face for a Market Leader

When parents divorce or separate, the child often becomes the unwilling intermediary for communication. Our Family Wizard (OFW) is a mobile application for co-parenting exes that facilitates and tracks communication, helps coordinate child duties and stores important information.

Categories: FLOSS Project Planets

Drupal blog: Increasing Drupal speaker diversity

Planet Drupal - Tue, 2019-08-13 13:23

This blog has been re-posted and edited with permission from Dries Buytaert's blog.

At DrupalCon Seattle, I spoke about some of the challenges open source communities like Drupal often have with increasing contributor diversity. We want our contributor base to look like everyone in the world who uses Drupal's technology on the internet, and unfortunately, that is not quite the reality today.

One way to step up is to help more people from underrepresented groups speak at Drupal conferences and workshops. Seeing and hearing from a more diverse group of people can inspire new contributors from all races, ethnicities, gender identities, geographies, religious groups, and more.

To help with this effort, the Drupal Diversity and Inclusion group is hosting a speaker diversity training workshop on September 21 and 28 with Jill Binder, whose expertise has also driven major speaker diversity improvements within the WordPress community.

I'd encourage you to either sign up for this session yourself or send the information to someone in a marginalized group who has knowledge to share, but may be hesitant to speak up. Helping someone see that their expertise is valuable is the kind of support we need in order to drive meaningful change.

Categories: FLOSS Project Planets

Mike Driscoll: An Intro to Flake8

Planet Python - Tue, 2019-08-13 13:15

Python has several linters that you can use to help you find errors in your code or warnings about style. For example, one of the most thorough linters is Pylint.

Flake8 is described as a tool for style guide enforcement. It is a wrapper around PyFlakes, pycodestyle and Ned Batchelder’s McCabe script. You can use Flake8 to lint your code and enforce PEP 8 compliance.


Installing Flake8 is pretty easy when you use pip. If you’d like to install flake8 to your default Python location, you can do so with the following command:

python -m pip install flake8

Now that Flake8 is installed, let’s learn how to use it!

Getting Started

The next step is to use Flake8 on your code base. Let’s write up a small piece of code to run Flake8 against.

Put the following code into a file named hello.py:

from tkinter import *

class Hello:
    def __init__(self, parent):
        self.master = parent
        parent.title("Hello Tkinter")

        self.label = Label(parent, text="Hello there")
        self.label.pack()

        self.close_button = Button(parent, text="Close", command=parent.quit)
        self.close_button.pack()

    def greet(self):
        print("Greetings!")

root = Tk()
my_gui = Hello(root)
root.mainloop()

Here you write a small GUI application using Python’s tkinter library. A lot of tkinter code on the internet uses the from tkinter import * pattern, which is something you should normally avoid. The reason is that you don’t know exactly what you are importing, and you could accidentally shadow one of the imported names with your own variable.

Let’s run Flake8 against this code example.

Open up your terminal and run the following command:

flake8 hello.py

You should see the following output:

hello.py:1:1: F403 'from tkinter import *' used; unable to detect undefined names
hello.py:3:1: E302 expected 2 blank lines, found 1
hello.py:8:22: F405 'Label' may be undefined, or defined from star imports: tkinter
hello.py:11:29: F405 'Button' may be undefined, or defined from star imports: tkinter
hello.py:18:1: E305 expected 2 blank lines after class or function definition, found 1
hello.py:18:8: F405 'Tk' may be undefined, or defined from star imports: tkinter
hello.py:20:16: W292 no newline at end of file

The items that start with “F” are PyFlakes error codes and point out potential errors in your code. The other errors come from pycodestyle. You should check out those two links to see the full error code listings and what they mean.

You can also run Flake8 against a directory of files rather than one file at a time.

If you want to limit the types of errors that get reported, you can do something like the following:

flake8 --select E123 my_project_dir

This will report only E123 errors for files in the specified directory and ignore all other types of errors.
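You can also go the other way and silence Flake8 on an individual line in the source itself, using a noqa comment (a standard Flake8 mechanism):

```python
# Flake8 would normally flag this import as F401 ("imported but unused"),
# but the trailing comment tells it to skip that specific check on this line.
import os  # noqa: F401

# A bare "# noqa" (with no code) suppresses every check on the line,
# which is usually broader than you want; prefer naming the code.
```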

For a full listing of the command line arguments you can use with Flake8, check out their documentation.

Finally, Flake8 allows you to change its configuration. For example, your company might only adhere to parts of PEP8, so you don’t want Flake8 to flag things that your company doesn’t care about. Flake8 also supports using plugins.
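As a sketch of what that configuration can look like: Flake8 reads a [flake8] section from a .flake8, setup.cfg or tox.ini file in your project directory. The option names below are real Flake8 options; the values are illustrative:

```ini
[flake8]
# Allow longer lines than PEP 8's default of 79 characters
max-line-length = 100
# Add codes the team has decided not to enforce to the default ignore list
extend-ignore = E203, W503
# Don't lint version control metadata or generated code
exclude = .git,__pycache__,build
```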

Wrapping Up

Flake8 might be just the tool to help you keep your code clean and free of errors. If you use a continuous integration system, like Travis CI or Jenkins, you can combine Flake8 with Black to automatically format your code and flag errors.

This is definitely a tool worth checking out, and it’s a lot less noisy than Pylint. Give it a try and see what you think!

Related Reading

The post An Intro to Flake8 appeared first on The Mouse Vs. The Python.

Categories: FLOSS Project Planets

Krita Sprint 2019

Planet KDE - Tue, 2019-08-13 13:12

So, we had a Krita sprint last week, a gathering of contributors of Krita. I’ve been at all sprints since 2015, which was roughly the year I became a Krita contributor. This is in part because I don’t have to go abroad, but also because I tend to do a lot of administrative side things.

This sprint was interesting in that it was an attempt to have as many artists as developers there, if not more. The idea was that whereas the previous sprint was very much focused on bugfixing and getting new contributors familiar with the code base (we fixed 40 bugs back then), this sprint would be more about investigating workflow issues, figuring out future goals, and general non-technical things like how to help people, how to engage people, and how to make people feel part of the community.

Unfortunately, it seems I am not really built for sprints. I was already somewhat tired when I arrived, and was eventually only able to do half days most of the time because there were just too many people …

So, what did I do this sprint?

Investigate LibSai (Tuesday)

So, Paint Tool Sai is a 2D painting program with a simple interface that was the hottest thing around 2006 or so, because at the time your options were PaintShop Pro (a good image editing program, but otherwise…), Photoshop CS2 (you could paint with it, but it was rather clunky), GIMP, OpenCanvas (very weird interface) and a bunch of natural-media-simulating programs (Corel, very buggy, and some others I don’t remember). Paint Tool Sai was special in that it had a stabilizer, mirroring/rotating of the viewport, a color-mixing brush, variable-width vector curves, and a couple of cool dockers. Mind you, it only had like 3 filters, but all those other things were HUGE back then. So everyone and their grandmother pirated it until the author actually made an English version, at which point like 90% of people still pirated it. Then the author proceeded to not update it for like… 8 years?, with Paint Tool Sai 2 being in beta for what feels like forever.

The lowdown is that nowadays many people (mostly teens, so I don’t really want to judge them too much) are still using Paint Tool Sai 1, pirated, and it’s so old it won’t work on Windows 10 computers anymore. One of the things that had always bothered me is that no program outside of Sai could handle opening the Sai file format. It was such a popular program, yet no one seemed to have tried?

So, it turns out someone has tried, and even made a library out of it. If you look at the readme, the reason no one besides Wunkolo has tried to support it is that Sai files aren’t just encoded, no, they’re encrypted. This would be fine if they were video game saves, but a painting program is not a video game, and I was slightly horrified: it is a technological mechanism put in place to keep people from getting at the artwork they made, rather than just the most efficient way to store it on the computer. So I now feel even more compelled to have Krita be able to open these files, so people can actually access them. I sat down with Boudewijn to figure out how much needs to be done to add it to Krita, but we got some build errors, so we’re delaying this for a bit. Made a Phabricator task instead:

T11330 – Implement LibSai for importing PaintTool Sai files

Pressure Calibration Widget (Tuesday)

This was a slightly selfish thing. I had been having trouble with my pen deciding it had a different pressure curve every so often, and adjusting the global tablet curve, while perfectly possible, was getting a bit annoying. I had seen pressure calibration widgets in some Android programs, and I figured that the way I determine my pressure curve (messing with the tablet tester and checking the values) is a bit too technical for most people, so I decided to gather up all my focus and program a little widget to help with that.

Right now, it asks for a soft stroke, a medium one and a heavy one, and then calculates the desired pressure curve from that. It’s a bit fiddly though, and I want to make it less fiddly while still friendly in how it guides you to provide the values it needs.

MR 104 – Initial Prototype Pressure Calibration Widget

HDR master class (Tuesday)

(Not really)

So, the sprint was also the first chance to test Krita on exciting new hardware. One of these was Krita on an Android device (more on that later); the other was the big HDR setup provided by Intel. I had already played with it back in January, and I have been playing with making HDR/scene-linear images in Krita since the beginning of its LUT docker support. Thus, when Raghu started painting, I ended up pointing at things to use and explaining some peculiarities (HDR is not just bright, but also wide gamut, and linear, so you are really painting with white).

Then Stefan joined in, and started asking questions, and I had to start my talk again. Then later that day Dmitry bothered David till he tried it out, and I explained everything again!

Generally, you don’t need an HDR setup to paint HDR images, but it does require a good idea of how to use the color management systems and some wrapping of your head around them. It seems that the latter was a really big hurdle, because artists who had come across as scared of it over IRC were, now that they could see the wide-gamut colors, a lot more positive about it.

Animation cycles were shown off as well, but I had run out of juice too much to really appreciate it.

Later that evening we went to the Italian restaurant on the corner, which miraculously made ordering à la carte work for a group of 25 people. I ended up translating the whole menu for Tusooaa and Tiar, who repaid me by making fun of my habit of pulling apart the nougat candy I got with the coffee. *shakes fist in a not terribly serious way* Later, during the walk, there were also discussions about burnout and mental health, a rather relevant topic for freelancers.

Open Lucht Museum Arnhem (Wednesday)

I did make a windmill, but… I really should’ve stayed in Deventer and caught up on sleep. Being Dutch, this was the 5th or 7th time I had seen this particular open-air museum (there’s another one (Bokrijk) in Belgium where I have been just as many times), but I had wanted to see how other people would react to it. In part because it shows all this old Dutch architecture, and in part because these are actual houses from those periods: they get carefully disassembled and then carefully rebuilt, brick by brick, beam by beam, on the museum terrain, which in itself is quite special.

But all I could think while there was ‘oh man, I want to sleep’. Next time I’ll just stay in Deventer, I guess.

I did get to take a peek at other people’s sketchbooks and saw, to my satisfaction, that I am not the only person who also just writes notes and todo lists in the sketchbook.

At dinner, we mostly talked about the weather, and about our confusion over the lack of spiciness in the otherwise spicy cuisine of Indonesia (which was later revealed to be caused by the waiters not understanding that we had wanted a mix of spicy and non-spicy things). And also that box beds are really weird.

Taking Notes (Thursday Afternoon)

Given the bad decision I had made the day before to go to the museum, I decided to be less dumb and told everyone I’d sleep during the morning.

And then when I joined everyone, it turned out there had been half a meeting during the morning. And Hellozee was kind of hinting that I should definitely take the notes for the afternoon (looking at his notes and his own description of that day, taking notes had been a bit too intense for him). Later he ranted at me about the text tool, and I told him: ‘Don’t worry, we know it is clunky as hell. Identifying that isn’t what is necessary to fix it.’ (We have several problems, the first being the actual font stack itself, so we can have shaping for scripts like those for Arabic and Hindi; then there’s the layout, so we can have word wrap and vertical layout for CJK scripts; and only after those are solved can we even start thinking of improving the UI.)

Anyhow, the second part of the meeting was about Instagram, marketing, and just having a bit of fun getting people to show off the art they made with Krita. The thing is, of course, that if you want it to be a bit of fun, you need to be very thorough in how you handle it. Competition could end up feeling dog-eat-dog, and we also need to make it really clear how to deal with the usual ethics around artists and ‘working for exposure’. Sara Tepes is the one who wants to start up the Krita account for Instagram, which I am really thankful for. She also began the discussion on this, and I feel a little bad because I pointed out the ethical aspect by making fun of ‘working for exposure’, and I later realized that she was new to the sprint, and maybe that had been a bit too forward.

And then I didn’t get the chance to talk to her afterwards, so I couldn’t apologize. I did get some comments from others that they were glad I brought it up, but it still could’ve been nicer. orz

In the end we came to a compromise that people seemed comfortable with: a cycle of several images, selected from things people explicitly tag to be included on social media, shown for a short period, and a general page that explains how the whole process works so that there’s no confusion about what, why and how, and it can just be a fun thing for people to do.

Android Testing (Friday)

I had poked at the Android version before, but last time it didn’t yet have graphics acceleration support, so it was really slow. This time I could really sit down and test it. It has definitely gotten a lot better, and I can see myself or other artists using this as an alternative to a sketchbook, to sit down and doodle on while the computer is reserved for the bigger, more intensive work.

It was also a little funny: when I showed someone it was pressure sensitive, all the other artists present walked over one by one to try poking the screen with a stylus. I guess we generally have so much trouble getting pressure to work on desktop devices that it’s a little unbelievable it would just work on a mobile device.

T11355 – General feedback Android version.

That evening, discussions were mostly about language, Photoshop’s magnetic lasso tool crashing, and the fact that Europeans talk about language a lot.


On Saturday I read an academic book of ~300 pages, something I had really needed after all this. I felt a lot more clear-headed afterwards. I had attempted to help Boudewijn with bug triaging, which is something we usually do at a sprint, but I just couldn’t concentrate.

We were all too tired to talk much on Saturday. I can only remember eating.


On Sunday I spent some time with Boudewijn going through the meeting notes and turning them into the sprint report. Boudewijn then spent five attempts trying to explain the current release schedule to me, and now I have my automated mails set up so people get warned about backporting their fixes and about the upcoming monthly release schedule.

In the evening I read through The Animator’s Survival Kit. We have a bit of an issue where Krita’s animation tools seem so intuitive that when it comes to the unintuitive things inherent to big projects themselves (RAM usage, planning, pipeline), people get utterly confused.

We’ve already been doing a lot in that area: making it more obvious when you are running out of RAM, making the render dialog a bit better, making onion skins a bit more guiding. But now I am also rewriting the animation page and trying to convey to aspiring animators that they cannot do a one-hour 60 fps film in a single Krita file, and that they will need to do things like planning. The Animator’s Survival Kit is a book that’s largely about planning, a topic that is talked about very little, which is why it is suggested to aspiring animators a lot, and I read it through to make sure I wasn’t about to suggest nonsense.

We had, after all the Indians had left, gone to an Indian restaurant. Discussions were about spicy food, language and Europe.


On Monday I stuck around for the IRC meeting and afterwards went home.

It was lovely to meet everyone individually, and each single conversation I had was lovely, but this is really one of those situations where I need to learn to take more breaks and not be too angry at myself for doing so. I hope to meet everyone again in the future in a less crowded setting, so I can have all the fun of meeting fellow contributors and none of the exhausting parts. The todo list we’ve accumulated is a bit daunting, but hopefully we’ll get through it together.

Categories: FLOSS Project Planets

Hook 42: Getting Started With Developer Environments

Planet Drupal - Tue, 2019-08-13 12:51
Getting Started With Developer Environments Lindsey Gemmill Tue, 08/13/2019 - 16:51
Categories: FLOSS Project Planets

Ricardo Mones: When your mail hub password is updated...

Planet Debian - Tue, 2019-08-13 11:30
don't forget to run postmap on your /etc/postfix/sasl_passwd
(repeat 100 times sotto voce or until falling asleep, whatever happens first).
Categories: FLOSS Project Planets

Bhavin Gandhi: Organizing PythonPune Meetups

Planet Python - Tue, 2019-08-13 11:23
One thing I like most about meetups is that you get to meet new people. Talking with people and hearing what they are doing helps a lot in gaining more knowledge. It is also a good platform for making connections with people who have similar areas of interest. I have been attending PythonPune meetups for the last two years. In this blog post, I will share some history of this group and how I got involved in organizing the meetups.
Categories: FLOSS Project Planets

Continuum Analytics Blog: Getting Started with Machine Learning in the Enterprise

Planet Python - Tue, 2019-08-13 10:47

Machine learning (ML) is a subset of artificial intelligence (AI) in which data scientists use algorithms and statistical  models to predict outcomes and/or perform specific tasks. ML models can automatically “learn from” data sets to…

The post Getting Started with Machine Learning in the Enterprise appeared first on Anaconda.

Categories: FLOSS Project Planets

Acro Media: Using Drupal for Headless Ecommerce Customer Experiences

Planet Drupal - Tue, 2019-08-13 10:45

Back in April, BigCommerce, in partnership with Acro Media, announced the release of the BigCommerce for Drupal module. This module effectively bridges the gap between the BigCommerce SaaS ecommerce platform and the Drupal open source content management system. It allows Drupal to be used as the frontend customer experience engine for a headless BigCommerce ecommerce store.

For BigCommerce, this integration provides a new and exciting way to utilize their platform for creating innovative, content-rich commerce experiences that were not possible via BigCommerce alone.

For Drupal, this integration extends the options its users and site-builders have for adding ecommerce functionality into a Drupal site. The flexibility of Drupal combined with the stability and ease-of-use of BigCommerce opens up new possibilities for Drupal that didn’t previously exist.

Since the announcement, BigCommerce and Acro Media have continued to educate and promote this exciting new headless commerce option. A new post on the BigCommerce blog published last week, titled Leverage Headless Commerce To Transform Your User Experience with Drupal Ecommerce (link below), is a recent addition to this information campaign. The BigCommerce teams are experts in what they do, and Acro Media is an expert in open source integrations and Drupal. They asked if we could provide an introduction for their readers to really explain what Drupal is and where it fits into the headless commerce mix. This, of course, was an opportunity not to be missed, and so our teams buckled down together once again to provide readers with the best information possible.

So without further explanation, click the link below to learn how you can leverage headless commerce to transform your user experience with Drupal.

Additional resources:
Categories: FLOSS Project Planets

Continuum Analytics Blog: What’s in a Name? Clarifying the Anaconda Metapackage

Planet Python - Tue, 2019-08-13 10:40

The name “Anaconda” is overloaded in many ways. There’s our company, Anaconda, Inc., the Anaconda Distribution, the anaconda metapackage, Anaconda Enterprise, and several other, sometimes completely unrelated projects (like Red Hat’s Anaconda). Here we hope…

The post What’s in a Name? Clarifying the Anaconda Metapackage appeared first on Anaconda.

Categories: FLOSS Project Planets

Drupal Association blog: Midwest Drupal Summit 2019

Planet Drupal - Tue, 2019-08-13 10:04

It's always wonderful to have Drupal community members gather in my hometown. Ann Arbor, Michigan, USA has hosted the Midwest Drupal Summit (MWDS) every August since 2016. Previous MWDS events were held in Chicago, IL, Minneapolis, MN, and Madison, WI. This summit is three days of Drupal contribution, collaboration, and fun. The event is small but mighty. As with any contribution-focused Drupal event, MWDS is important because some of the most active community teams of the Drupal project dedicate time to work through challenges and celebrate what makes open source special.

I overheard several topics discussed over the three days.

  • Drupal.org infrastructure - its current state and ideas for improvements

  • Coordination between the Composer Initiative and core maintainers

  • The Automatic Updates initiative sprinting on package signing

  • Ideas for Contribution Credit system enhancements, to expand recognition beyond activity that takes place on Drupal.org

  • Drupal core contributors and the Drupal Association engineering team collaborating on critical Drupal 9 release blockers

  • The Security Team talking through ongoing work

  • Gitlab merge requests workflow and UI

  • Connecting the work that contributors are doing across various projects

  • Fun social events

Group lunch in Ann Arbor.

This opportunity to listen and overhear the thought and care that go into Drupal is one I appreciate. It was fantastic to hear a community member tell the Drupal Association team that they are "impressed with the gitlab work. I created a sandbox and the URL worked." It's one thing to see public feedback on the work done for the community, it's a whole other thing to hear it in person.

Contribution in several forms - talking through ideas, blockers, giving feedback and opinions are just a few ways to participate.

Local midwesterners who take the time to attend MWDS get an opportunity to dive into contribution on a variety of topics. There are always mentors and subject-matter experts ready to help. My own Drupal core commit happened at a past MWDS, where I gave feedback on an issue from the perspective of a content editor. This year, Wilson S. had a first-time commit on issue #3008029, and usually there's at least one first-time commit; a memorable one was the time Megan (megansanicki) had her first commit, which was also a live commit by Angie Byron (webchick).

Here's what a few participants had to say about their experience:

"I feel inspired as I watch the local community and visitors organically interact and participate with the discussions held around them." ~ Matthew Radcliffe (mradcliffe)

“This was my first Drupal Code Sprint event. Meeting all the great people from near and afar in person was awesome. Matthew Radcliffe helped me overcome my apprehension of contributing to Drupal Core. I look forward to continuing contributing and connecting with the community.” ~ Phill Tran (philltran)

"As a recent re-transplant back to Michigan, I wanted to get back in touch with my local Drupal community. Being a FE dev, sometimes it's hard to find things to contribute via core or porting of modules. Some of the content analysis and accessibility testing was really interesting to me. As someone who has not contributed in the past @mradcliff was an excellent teacher on how to get the sprint environment up and running and how to get started on issues." ~ Chris Sands (chrissands)

"As part of the Drupal Association staff, I find MWDS is always a wonderful opportunity to connect with some key community members and create more alignment between DA initiatives and community-driven work. It's also a wonderful time to catch up with Drupal family." ~ Tim Lehnen (hestenet)

"Always the best event of the year." ~ xjm

“I am glad to have met everyone. I had a one-on-one mentoring session with Matthew Radcliffe. It’s priceless!” ~ Wilson Suprapto (wilsonsp)

Did I mention that Chris also became a Drupal Association member during this event?! Thanks Chris!

The Drupal Association engineering team members are in daily contact with the community online. In-person events, however, are serendipitous: the insight from community members who have the expertise to help Drupal.org improve for everyone is right there in the room. For new contributors, the first step is any move you make to participate. I think this timely tweet I saw over the weekend sums it up:

The great thing about open source is you can often just contribute. Jump in and help - there's tons of documentation help needed, but open issues, bugs, etc. Start small but why not start today?

— Nick Ruffilo (@NickRuffilo) August 10, 2019

Special thanks to Michael Hess for organizing this event and Neha & Danny from University of Michigan for making sure everyone had a fantastic time.

Tim at the Treeline, on a zipline!

For reference, here are the issues that moved forward during the event.

Categories: FLOSS Project Planets

Krita 2019 Sprint: Animation and Workflow BoF

Planet KDE - Tue, 2019-08-13 10:00
Last week we had a huge Krita Sprint in Deventer. A detailed report was written by Boudewijn here, and I will concentrate on the Animation and Workflow discussion we had on Tuesday, when Boudewijn was away, meeting and managing arriving people. The discussion was centered around Steven and his workflow, but other people joined in: Noemie, Scott, Raghavendra and Jouni.
(Eternal) Eraser problem

Steven brought up the point that the current brush options "Eraser Switch Size" and "Eraser Switch Opacity" are buggy, which wound up an old topic again. These options were always considered a workaround for people who need a distinct eraser tool/brush tip, and they were always difficult to maintain.

After a long discussion with a broader circle of people, we concluded that the "Ten Brushes" plugin can be used as an alternative to a separate eraser tool: one just assigns an eraser-behaving preset to the 'E' key using this plugin. So we decided that we need the following steps:
Proposed solution:
  1. Ten Brushes Plugin should have some eraser preset configured by default
  2. This eraser preset should be assigned to "Shift+E" by default. So when people ask about "Eraser Tool" we could just tell them "please use Shift+E".
  3. [BUG] The Ten Brushes plugin doesn't reset back to a normal brush when the user picks/changes the painting color, like the normal eraser mode does.
  4. [BUG] Brush slot numbering is done in 1,2,3,...,0 order, which is not obvious. It should be 0,1,2,...,9 instead.
  5. [BUG] It is not possible to set up a shortcut for a brush preset right in the Ten Brushes plugin itself; the user has to go to the settings dialog.
Stabilizer workflow issues

In Krita, stabilizer settings are global. That is, they are applied to whatever brush preset you are using at the moment. That is very inconvenient, e.g. when you do precise line art: if you switch to a big eraser to fix up the line, you don't need the same stabilization as for the liner. Proposed solution:
  1. The stabilizer settings stay in the Tool Options docker; we don't move them into the Brush Settings (because sometimes you need them to be global?)
  2. Brush Preset should have a checkbox "Save Stabilizer Settings" that will load/save the stabilizer settings when the preset is selected/unselected.
  3. The editing of these (basically) brush-based setting will happen in the tool option.
  • I'm not sure if the last point is sane. Technically, we could move the stabilizer settings into the brush preset. And if the user wants the same stabilizer settings in different presets, they can just lock the corresponding brush settings (we have a special lock icon for that). So should we move the stabilizer settings into the brush preset editor, or keep them in the tool options?
Cut Brush feature

Sometimes painters need a lot of stamps for often-used objects, e.g. a head or a leg of an animation character. Many painters use the brush preset selector as storage for these: if you need a copy of a head on another frame, just select the preset and click at the proper position. We already have stamp brushes and they work quite well; we just need to streamline the workflow a bit.

Proposed solution:
  1. Add a shortcut for converting the current selection into a brush. It should in particular:
    • create a brush from the current selection, give it a default name, and create an icon from the selection itself
    • deselect the current selection. This is needed to ensure that the user can paint right after pressing the shortcut
  2. There should be shortcuts to rotate and scale the current brush
  3. There should be a shortcut for switching to the prev/next dab of an animated brush
  4. The brush needs a special outline mode, where it paints not an outline but a full-color preview. It should be activated by some modifier (that is, press+hold).
  5. Ideally, if multiple frames are selected, the created brush should become animated. That would allow people to create "walking brush" or "raining brush".
Multiframe editing mode

One of the major things Krita's animation tools still lack is a multiframe editing mode, that is, the ability to transform/edit multiple frames at once. We discussed it and ended up with a list of requirements.

Proposed solution:
  1. By default all the editing tools transform the current frame only
  2. The only exception is "Image" operations, which operate on the entire image, e.g. scale, rotate, change color space. These operations work on all existing frames.
  3. If more than one frame is selected in the timeline, the operation/tool should be applied to those frames only.
  4. We need a shortcut/action in the frame's (or timeline layer's) context menu: "Select all frames"
  5. Tools/Actions that should support multiframe operations:
    • Brush Tool (low-priority)
    • Move Tool
    • Transform Tool
    • Fill Tool (may be efficiently used on multiple frames with erase-mode-trick)
    • Filters
    • Copy-Paste selection (now we can only copy-paste frames, not selections)
    • Fill with Color/Pattern/Clear
BUGSThere is also a set of unsorted bugs that we found out during the discussion:
  1. On Windows, multiple main windows don't have unique identifiers, so they are not distinguishable from OBS.
  2. The animated brush spits out a lot of dabs at the beginning of the stroke
  3. "Show in Timeline" should be the default for all new layers
  4. Fill Tool is broken with Onion Skins (BUG:405753)
  5. Transform Tool is broken with Onion Skins (BUG:408152)
  6. Move Tool is broken with Onion Skins (BUG:392557)
  7. When copy-pasting frames on the timeline, in-betweens should override the destination (and technically remove everything that was in the destination position). Right now source and destination keyframes are merged. That is not what animators expect.
  8. Changing "End" of the animation in the "Animation" docker doesn't update the timeline's scroll area. You need to create a new layer to update it.
  9. Delayed Save dialog doesn't show the name of the stroke that delays it (and sometimes the progress bar as well). It used to work, but now is broken.
  10. [WISH] We need "Insert pasted frames", which will not override destination, but just offset it to the right.
  11. [WISH] Filters need better progress reporting
  12. [WISH] Auto-change the background of the Text Edit dialog when the text color is too similar to it.

As a conclusion, it was very nice to be at the sprint and to be able to talk to real painters! Face-to-face meetings are really important for getting such detailed lists of new features we need to implement. If we had done this discussion through Phabricator, we would have spent weeks on it :)

Categories: FLOSS Project Planets

KDE.org Applications Site

Planet KDE - Tue, 2019-08-13 10:00

I’ve updated the kde.org/applications site so KDE now has web pages and lists the applications we produce.

In the update this week it’s gained Console apps and Addons.

Some exciting console apps we have include Clazy, kdesrc-build, KDebug Settings (a GUI app, but with no menu entry) and KDialog (another GUI app, but called from the command line).

This KDialog example takes on a whole new meaning after watching the Chernobyl telly drama.

And for addon projects we have stuff like File Stash, Latte Dock and KDevelop’s addons for PHP and Python.

At KDE we want to be a great home for your project, and this is an important part of that.


Categories: FLOSS Project Planets

Real Python: Traditional Face Detection With Python

Planet Python - Tue, 2019-08-13 10:00

Computer vision is an exciting and growing field. There are tons of interesting problems to solve! One of them is face detection: the ability of a computer to recognize that a photograph contains a human face, and tell you where it is located. In this course, you’ll learn about face detection with Python.

To detect any object in an image, it is necessary to understand how images are represented inside a computer, and how that object differs visually from any other object.

Once that is done, the process of scanning an image and looking for those visual cues needs to be automated and optimized. All these steps come together to form a fast and reliable computer vision algorithm.

In this course, you’ll learn:

  • What face detection is
  • How computers understand features in images
  • How to quickly analyze many different features to reach a decision
  • How to use a minimal Python solution for detecting human faces in images


Categories: FLOSS Project Planets

Sven Hoexter: Debian/buster on HPE DL360G10 - interfaces change back to ethX

Planet Debian - Tue, 2019-08-13 09:24

For yet unknown reasons, some recently installed HPE DL360G10 servers running buster changed their interface names back from the expected "onboard"-based names eno5 and eno6 to ethX after a reboot.

My current workaround is a link file which kind of enforces the onboard scheme.

$ cat /etc/systemd/network/101-onboard-rd.link
[Link]
NamePolicy=onboard kernel database slot path
MACAddressPolicy=none
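For comparison (this is an illustration, not part of the original workaround), the same mechanism can pin a name per interface by matching on the MAC address instead of relying on the onboard policy. Using the MAC reported by udevadm below:

```ini
# /etc/systemd/network/10-eno5.link (illustrative alternative)
[Match]
MACAddress=48:df:37:94:4a:b0

[Link]
Name=eno5
```

A per-interface match like this is more explicit, at the cost of one file per NIC.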

The hosts are running the latest buster kernel

Linux foobar 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u2 (2019-08-08) x86_64 GNU/Linux

A downgrade of the kernel did not change anything, so I currently like to believe this is not related to a kernel change.

I tried to collect a few pieces of information on one of the broken systems while in a broken state:

root@foobar:~# SYSTEMD_LOG_LEVEL=debug udevadm test-builtin net_setup_link /sys/class/net/eth0
=== trie on-disk ===
tool version: 241
file size: 9492053 bytes
header size 80 bytes
strings 2069269 bytes
nodes 7422704 bytes
Load module index
Found container virtualization none.
timestamp of '/etc/systemd/network' changed
Skipping overridden file '/usr/lib/systemd/network/99-default.link'.
Parsed configuration file /etc/systemd/network/99-default.link
Created link configuration context.
ID_NET_DRIVER=i40e
eth0: No matching link configuration found.
Builtin command 'net_setup_link' fails: No such file or directory
Unload module index
Unloaded link configuration context.

root@foobar:~# udevadm test-builtin net_id /sys/class/net/eth0 2>/dev/null
ID_NET_NAMING_SCHEME=v240
ID_NET_NAME_MAC=enx48df37944ab0
ID_OUI_FROM_DATABASE=Hewlett Packard Enterprise
ID_NET_NAME_ONBOARD=eno5
ID_NET_NAME_PATH=enp93s0f0

The most interesting hint right now seems to be that /sys/class/net/eth0/name_assign_type is invalid, while on systems before the breaking reboot, and after applying the .link file fix, it contains a 4.

Since those hosts were initially installed with buster, there are no remains of any ethX-related configuration present. If someone has an idea what is going on, write a mail (sven at stormbind dot net), or blog on planet.d.o.

I found a vaguely similar bug report for a Dell PE server in #929622, though that was a change from 4.9 (stretch) to the 4.19 stretch-bpo kernel; there the device names were not changed back to the ethX scheme, and Ben found a reason for it inside the kernel. The hardware is also different, using bnxt_en, while I have tg3 and i40e in use.

Categories: FLOSS Project Planets

Specbee: AMP It Up! The Why and How of Drupal AMP (And what it can do to your website)

Planet Drupal - Tue, 2019-08-13 09:08
AMP It Up! The Why and How of Drupal AMP (And what it can do to your website) Shefali Shetty 13 Aug, 2019 Top 10 best practices for designing a perfect UX for your mobile app

When the open-source Accelerated Mobile Pages (AMP) project was launched in October 2015, Google AMP was often compared to Facebook's Instant Articles. Nonetheless, both of the tech-giants share a common goal – to make web pages load faster. While AMP can be reached with a web URL, Facebook’s Instant Articles aimed only at easing the pain for their app-users. Teaming up with some powerful launch partners in the publishing and technology sectors, Google AMP aimed to impact the future of content distribution on mobile devices.

Fast forward to today, and Google AMP is one of the hottest things on the internet. With over 25 million website domains having published over 4 billion AMP pages, it did not take long for the project to become a huge success. Comprising two main features, speed and support for the monetization of objects, AMP's implications are far-reaching for enterprise businesses, marketers, ecommerce and every other big and small organization. With great features, and the fact that it originated as a Google initiative, it is no surprise that AMP pages get featured more prominently in Google SERPs.

What is AMP?

With the rapid surge in mobile users, merely providing a website-like user experience no longer cuts it. Today's mobile users come with shorter attention spans and varied internet speeds. Businesses can cater to each of these challenges with a fast-loading, lightweight and app-like website built with Google AMP.

AMP is an open-source framework that simplifies the HTML, streamlines CSS rules, restricts the use of JavaScript (you can use AMP's component library instead) and delivers pages via the Google AMP Cache (a proxy-based content delivery network).
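To make these restrictions concrete, here is a skeletal AMP page. This is a simplified sketch: the spec additionally mandates the amp-boilerplate style block (omitted here for brevity), and the example.com URLs are placeholders.

```html
<!doctype html>
<html ⚡>
<head>
  <meta charset="utf-8">
  <!-- The AMP runtime replaces arbitrary custom JavaScript -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <link rel="canonical" href="https://example.com/article.html">
  <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
  <!-- All author CSS must be inlined and is size-limited -->
  <style amp-custom>h1 { color: #333; }</style>
</head>
<body>
  <h1>Hello AMP</h1>
  <!-- A plain <img> tag is replaced by the amp-img component -->
  <amp-img src="https://example.com/hero.jpg" width="600" height="400" layout="responsive"></amp-img>
</body>
</html>
```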

Why AMP?

Impacting the technical architecture of digital assets, Google's open source initiative aims to provide streamlined web pages to mobile browsers and other apps.

It is Fast, like Really Fast

A Google AMP page loads about twice as fast as a comparable normal mobile page, with latency as little as one-tenth. Intended to provide the fastest experience for mobile users, customers will be able to access content faster, and they are more likely to stay on the page to make a purchase or enquire about your service, because they know it won't take long.

An Organic Boost

Eligibility for the AMP carousel that sits above the other search results on the Google SERP results in a substantial increase in organic results and traffic, a major boost for the visibility of an organization. Though not responsible for increasing page authority or domain authority, Google AMP plays a key role in sending far more traffic your way.


The fact that AMP leverages, rather than disrupts, the existing web infrastructure of a website makes the cost of adopting AMP much lower than that of competing technologies. In return, Google AMP enables a better user experience, which translates to better conversion rates on mobile devices.

Drupal & AMP

With benefits like better user engagement, higher dwell time and easy navigation between content, businesses are bound to drive more traffic with AMP-friendly pages and increase their revenue. The AMP module is especially useful for marketers, as it is a great addition to optimize their Drupal SEO efforts.

AMP produces HTML that makes the web a faster place. Implementing the AMP module in Drupal is really simple. Just download, enable and configure!
Before you begin with the integration of the AMP module with Drupal, you need:

AMP Module: The AMP module mainly handles the conversion of regular Drupal HTML pages to AMP-compliant pages.

Two main components of the AMP module:

AMP Theme: I'm sure you have come across AMP HTML and its standards, the ones responsible for making your content look effective and perform well on mobile. The Drupal AMP theme produces the markup required by these standards for websites looking to perform well in the mobile world. The AMP theme also allows the creation of custom-made AMP pages.

AMP PHP Library: The Drupal AMP PHP Library handles the final corrections that turn markup into valid AMP HTML. The AMP theme ships with an AMP base theme and the ExAMPle sub-theme; users can also create their own AMP sub-theme from scratch, or modify the default ExAMPle sub-theme for their specific requirements.

How to set up AMP with Drupal?

Before you integrate AMP with Drupal, you need to understand that AMP does not replace your entire website. Instead, at its essence, the AMP module provides a view mode for content types, which is displayed when the browser asks for an AMP version.

Download the AMP Module

With your local environment prepped, type the following terminal command:

drush dl amp, amptheme, composer_manager

This command will download the AMP module, the AMP theme and the Composer Manager module (if you do not have Composer Manager already).

If you have been a user of Drupal 8, you are probably familiar with Composer and its function as a packaging tool for PHP that installs dependencies for a project. Here, Composer is used to install a PHP library that converts raw HTML into AMP HTML, and it will also help get that library working with Drupal.

However, as the AMP module does not explicitly require Composer Manager as a dependency, alternate workflows can make use of the module's Composer files without using Composer Manager.

Next, enable the items that are required to get started:

drush en composer_manager, amptheme, ampsubtheme_example

Before enabling the AMP module itself, an AMP sub-theme needs to be enabled. The default configuration for the AMP module sets the AMP Theme to ‘ExAMPle subtheme.’

How to Enable AMP Module?

The AMP module for Drupal can be enabled using Drush. Once the module is enabled, Composer Manager will take care of downloading the other AMP libraries and their dependencies.

drush en amp


Once everything is installed and enabled, AMP needs to be configured using the web interface before Drupal AMP pages can be displayed. First up, you need to decide which content types should have an AMP version; you might not need it for all of them. Enable a particular content type by clicking on the "Enable AMP in Custom Display Settings" link. On the next page, open the "Custom Display Settings" fieldset, check the AMP box, then click Save.

Setting an AMP Theme

Once the AMP module and content types are configured, it is time to select a theme for AMP pages and configure it. The view modes and field formatters of the Drupal AMP module take care of the main content of the page. The Drupal AMP theme, on the other hand, changes the markup outside the main content area of the page.

Also, the Drupal AMP theme enables you to create custom styles for your AMP pages. On the main AMP config page, make sure that the AMP theme setting is set to the ExAMPle subtheme or the custom AMP sub-theme that you created.

One thing is certain: Google favours websites that provide the best possible experience for mobile users. With tough competition from Facebook Instant Articles and Apple News, Google AMP aims to decrease page loading time and improve the user experience. Drupal websites can leverage the AMP module to produce AMP HTML and substantially speed up page loading time. Beyond the success of publishers and ecommerce websites on Drupal 8, a number of other websites are utilizing AMP along with progressive web apps. With a bright future ahead, Google AMP will be one of the strongest tools for traffic generation and conversions.


Categories: FLOSS Project Planets

Jacob Rockowitz: Governments should pay to fix accessibility issues in Drupal and Open Source projects

Planet Drupal - Tue, 2019-08-13 09:03

We need to nudge governments to start funding and fixing accessibility issues in the Open Source projects that are being used to build digital experiences. Most governments are required by law to build accessible websites and applications. Drupal’s commitment to accessibility is why Drupal is used by a lot of governments to create ambitious digital experiences.

Governments have complex budgeting systems and policies, which can make it difficult for them to contribute to Open Source. At the same time, there are many consulting agencies specializing in Drupal for government, and maybe these organizations need to consider fixing accessibility issues on behalf of their clients.

If an agency started contributing, funding, and fixing accessibility issues in Drupal core and Open Source, they’d be showing their government clients that they are indeed experts who understand the importance of getting involved in the Open Source community.

So I have started this blog post with a direct ask for governments to pay to fix accessibility issues without a full explanation as to why. It helps to step back and look at the bigger context: “Why should governments fix accessibility issues in Drupal Core?”

Governments are using Drupal

This summer’s DrupalGovCon in Washington, DC was the largest Drupal event on the East Coast of the United States. The conference was completely free to attend with 1500 registrants. There were dozens of sponsors promoting their Drupal expertise and services. My presentation, Webform for Government, included a section about accessibility. There were also four sessions dedicated to accessibility.

Besides presenting at DrupalGovCon, I...

Categories: FLOSS Project Planets

Stack Abuse: Using Django Signals to Simplify and Decouple Code

Planet Python - Tue, 2019-08-13 08:25

Systems are getting more complex as time goes by, and this warrants the need to decouple them further. A decoupled system is easier to build, extend, and maintain in the long run, since decoupling not only reduces the complexity of the system but also lets each part be managed individually. Fault tolerance is also enhanced, since in a decoupled system a failing component does not drag down the entire system with it.

Django is a powerful open-source web framework that can be used to build large and complex systems, as well as small ones. It follows the model-template-view architectural pattern and it is true to its goal of helping developers achieve the delivery of complex data-driven web-based applications.

Django enables us to decouple system functionality by building separate apps within a project. For instance, we can have a shopping system with separate apps that handle accounts, emailing of receipts, and notifications, among other things.

In such a system, several apps may need to perform an action when certain events take place. One event can occur when a customer places an order. For example, we will need to notify the user via email and also send the order to the supplier or vendor, while at the same time receiving and processing the payment. All these events happen at the same time, and since our application is decoupled, we need to keep every component in sync. But how do we achieve this?

Django Signals come in handy in such a situation. All that needs to happen is that a signal is sent when a user places an order, and every related or affected component listens for it and performs its operations. Let us explore signals further in this post.

Signals at a Glance

Django Signals are an implementation of the Observer Pattern. In such a design pattern, a subscription mechanism is implemented where multiple objects are subscribed to, or "observing", a particular object and any events that may happen to it. A good analogy is how all subscribers to a YouTube channel get a notification when a content creator uploads new content.

Through a "signal dispatcher", Django is able to distribute signals in a decoupled setup to registered "receivers" in the various system components. Signals are registered and triggered whenever certain events occur, and any listeners for that event will be notified that the event has occurred, alongside receiving some contextual data within the payload that may be relevant to the functionality of the receiver. A receiver can be any Python function or method. More on this later on.
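The dispatcher-and-receivers mechanism described above can be sketched in a few lines of plain Python. This is an illustration of the underlying observer pattern only, not Django's actual implementation; all names here are made up.

```python
# Minimal observer-pattern sketch: a "signal" keeps a list of receivers
# and forwards the sender plus a contextual payload to each of them.
class MiniSignal:
    def __init__(self):
        self.receivers = []

    def connect(self, receiver):
        # Any callable can register as a receiver
        self.receivers.append(receiver)

    def send(self, sender, **payload):
        # Notify every registered receiver with the contextual data
        for receiver in self.receivers:
            receiver(sender=sender, **payload)

# Like the YouTube analogy: one upload event, many notified subscribers
video_uploaded = MiniSignal()
notified = []

def email_subscriber(sender, title, **kwargs):
    notified.append(f"email: {sender} uploaded {title}")

def push_subscriber(sender, title, **kwargs):
    notified.append(f"push: {sender} uploaded {title}")

video_uploaded.connect(email_subscriber)
video_uploaded.connect(push_subscriber)
video_uploaded.send("some-channel", title="New video")
```

Django's real dispatcher adds robustness on top of this idea (weak references to receivers, sender filtering, collected return values), but the subscription mechanics are the same.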

Aside from the signals dispatcher, Django also ships with some useful signals that we can listen on. They include:

  • post_save, which is sent out whenever a Django model instance has been saved; its created flag tells you whether the instance was newly created. For instance, when a user signs up or uploads a new post,
  • pre_delete, which is sent out just before a Django model is deleted. A good scenario would be when a user is deleting a message or their account,
  • request_finished, which is fired whenever Django completes serving an HTTP request. This can range from opening the website or accessing a particular resource.

Another advantage of Django is that it is a highly customizable framework. In our case, we can create our custom signals and use the built-in system to dispatch and receive them in our decoupled system. In the demo section, we will subscribe to some of Django's built-in signals, and also create some custom ones of our own.

When to Use Signals

We have already identified what Django Signals are and how they work, but as with any other framework feature, they are not meant to be used at every turn. There are particular scenarios where it is highly recommended that we use Django signals, and they include:

  • When we have many separate pieces of code interested in the same events, a signal would help distribute the event notification as opposed to us invoking all of the different pieces of code at the same point, which can get untidy and introduce bugs
  • We can also use Django signals to handle interactions between components in a decoupled system as an alternative to interaction through RESTful communication mechanisms
  • Signals are also useful when extending third-party libraries where we want to avoid modifying them, but need to add extra functionality
Advantages of Signals

Django Signals simplify the implementation of our decoupled systems in various ways. They help us implement reusable applications: instead of reimplementing functionality repeatedly, or modifying other parts of the system, we can just respond to signals without affecting other code. This way, components of a system can be modified, added, or removed without touching the existing codebase.

Signals also provide a simplified mechanism for keeping different components of a decoupled system in sync and up to date with each other.

Demo Project

In our demo project, we will build a simple jobs board where users will access the site, view available jobs and pick a job posting to subscribe to. The users will subscribe just by submitting their email address and will be notified of any changes to the job. For instance, if the requirements change, the job opening is closed, or if the job posting is taken down. All of these changes will be performed by an admin who will have a dashboard to create, update, and even take down job postings.

In the spirit of decoupling our application, we will build the main Jobs Board application and a separate Notifications application that will be tasked with notifying users whenever required. We will then use signals to invoke functionality in the Notifications app from the main Jobs Board app.

Another testament to Django's expansive feature set is the built-in administration dashboard that our administrators will use to manage jobs. Our work on that front is greatly reduced and we can prototype our application faster.

Project Setup

It is good practice to build Python projects in a virtual environment so that we work in an isolated environment that does not affect the system's Python setup, so we'll use Pipenv for this.

Let us first set up our environment:

# Set up the environment
$ pipenv install --three

# Activate the virtual environment
$ pipenv shell

# Install Django
$ pipenv install django

Django comes with some commands that help us perform various tasks such as creating a project, creating apps, migrating data, and testing code, among others. To create our project:

# Create the project
$ django-admin startproject jobs_board && cd jobs_board

# Create the decoupled applications
$ django-admin startapp jobs_board_main
$ django-admin startapp jobs_board_notifications

The commands above will create a Django project with two applications within it, which are decoupled from each other but can still work together. To confirm that our setup was successful, let us apply the default migrations that come with Django and set up our database and tables:

$ python manage.py migrate
$ python manage.py runserver

When we access the local running instance of our Django project, we should see the following:

This means we have set up our Django project successfully and can now start implementing our logic.


Django is based on the model-template-view architectural pattern, and this pattern will also guide our implementation. We will create models to define our data, then implement views to handle data access and manipulation, and finally templates to render our data to the end-user in the browser.

To have our applications integrated into the main Django application, we have to add them to the jobs_board/settings.py under INSTALLED_APPS, as follows:

INSTALLED_APPS = [
    # Existing apps remain...

    # jobs_board apps
    'jobs_board_main',
    'jobs_board_notifications',
]

Part 1: The Main Jobs Board App

This is where the bulk of our system's functionality will reside and it will be the interaction point with our users. It will contain our models, views and templates and some bespoke signals that we will use to interact with the Notifications app.

Let us start by creating our models in jobs_board_main/models.py:

# jobs_board_main/models.py
from django.db import models

class Job(models.Model):
    company = models.CharField(max_length=255, blank=False)
    company_email = models.CharField(max_length=255, blank=False)
    title = models.CharField(max_length=255, blank=False)
    details = models.CharField(max_length=255, blank=True)
    status = models.BooleanField(default=True)
    date_created = models.DateTimeField(auto_now_add=True)
    date_modified = models.DateTimeField(auto_now=True)

class Subscriber(models.Model):
    email = models.CharField(max_length=255, blank=False, unique=True)
    date_created = models.DateTimeField(auto_now_add=True)
    date_modified = models.DateTimeField(auto_now=True)

class Subscription(models.Model):
    email = models.CharField(max_length=255, blank=False, unique=True)
    user = models.ForeignKey(Subscriber, related_name="subscriptions", on_delete=models.CASCADE)
    job = models.ForeignKey(Job, related_name="jobs", on_delete=models.CASCADE)
    date_created = models.DateTimeField(auto_now_add=True)
    date_modified = models.DateTimeField(auto_now=True)

We create a model to define our Job posting, which has a company name and the job details alongside the status of the job opening. We also have a model to store our subscribers, taking only their email addresses. Subscribers and Jobs come together through the Subscription model, where we store details of subscriptions to job postings.

With our models in place, we need to make migrations and migrate them to have the tables created in the database:

$ python manage.py makemigrations
$ python manage.py migrate

Next we move on to the view section of our application. Let's create a view to display all job postings, and another to display an individual job posting where users can subscribe to it by submitting their email.

We will start by creating the view that will handle the displaying of all our jobs:

# jobs_board_main/views.py
from django.shortcuts import render

from .models import Job

def get_jobs(request):
    # get all jobs from the DB
    jobs = Job.objects.all()
    return render(request, 'jobs.html', {'jobs': jobs})

For this project we will use function-based views, the alternative being class-based views, but that is not part of this discussion. We query the database for all the jobs and respond to the request by specifying the template that will render the jobs and also including the jobs in the response.

Django ships with its own templating engine (the Django template language, whose syntax is similar to Jinja), which we will use to create the HTML files that will be rendered to the end-user. In our jobs_board_main application, we will create a templates folder to host all the HTML files that we will render to the end-users.

The template to render all the jobs will display all jobs with links to individual job postings, as follows:

<!-- jobs_board_main/templates/jobs.html -->
<!DOCTYPE html>
<html>
  <head>
    <title>Jobs Board Homepage</title>
  </head>
  <body>
    <h2>Welcome to the Jobs board</h2>
    {% for job in jobs %}
    <div>
      <a href="/jobs/{{ job.id }}">{{ job.title }} at {{ job.company }}</a>
      <p>{{ job.details }}</p>
    </div>
    {% endfor %}
  </body>
</html>

We have created the Job model, the get_jobs view to fetch and display all the jobs, and the template to render the jobs listing. To bring all this work together, we have to create an endpoint from which the jobs will be accessible, and we do so by creating a urls.py file in our jobs_board_main application:

# jobs_board_main/urls.py
from django.urls import path

from .views import get_jobs

urlpatterns = [
    # All jobs
    path('jobs/', get_jobs, name="jobs_view"),
]

In this file, we import our view, create a path and attach our view to it. We now register our application's URLs in the main urls.py file in the jobs_board project folder:

# jobs_board/urls.py
urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('jobs_board_main.urls')),  # <--- Add this line
]

Our project is ready to be tested now. This is what we get when we run the application and navigate to localhost:8000/jobs:

We currently have no jobs in place. Django ships with an administration application that we can use to perform our data entry. First, we start by creating a superuser:

$ python manage.py createsuperuser

With the superuser created, we need to register our models in the admin.py file in our jobs_board_main application:

# jobs_board_main/admin.py
from django.contrib import admin

from .models import Job

# Register your models here.
admin.site.register(Job)

We restart our application and navigate to localhost:8000/admin and log in with the credentials we just set up. This is the result:

When we click on the plus sign on the "Jobs" row, we get a form where we fill in details about our job posting:

When we save the job and navigate back to the jobs endpoint, we are greeted by the job posting that we have just created:

We will now create the views, templates, and URLs to display a single job and also allow users to subscribe by submitting their email.

Our jobs_board_main/views.py will be extended as follows:

# jobs_board_main/views.py

# previous code remains
from .models import Job, Subscriber, Subscription

def get_job(request, id):
    job = Job.objects.get(pk=id)
    return render(request, 'job.html', {'job': job})

def subscribe(request, id):
    job = Job.objects.get(pk=id)
    sub = Subscriber(email=request.POST['email'])
    sub.save()

    subscription = Subscription(user=sub, job=job)
    subscription.save()

    payload = {
        'job': job,
        'email': request.POST['email']
    }
    return render(request, 'subscribed.html', {'payload': payload})

We will also need to create a template for the single view of a job posting in templates/job.html, which includes the form that takes in a user's email and subscribes them to the job posting:

<!-- jobs_board_main/templates/job.html -->
<html>
  <head>
    <title>Jobs Board - {{ job.title }}</title>
  </head>
  <body>
    <div>
      <h3>{{ job.title }} at {{ job.company }}</h3>
      <p>{{ job.details }}</p>
      <br>
      <p>Subscribe to this job posting by submitting your email</p>
      <form action="/jobs/{{ job.id }}/subscribe" method="POST">
        {% csrf_token %}
        <input type="email" name="email" id="email" placeholder="Enter your email"/>
        <input type="submit" value="Subscribe">
      </form>
      <hr>
    </div>
  </body>
</html>

Once a user subscribes to a job, we will need to redirect them to a confirmation page whose subscribed.html template will be as follows:

<!-- jobs_board_main/templates/subscribed.html -->
<!DOCTYPE html>
<html>
  <head>
    <title>Jobs Board - Subscribed</title>
  </head>
  <body>
    <div>
      <h3>Subscription confirmed!</h3>
      <p>Dear {{ payload.email }}, thank you for subscribing to {{ payload.job.title }}</p>
    </div>
  </body>
</html>

Finally, our new functionality will need to be exposed via endpoints, which we append to our existing jobs_board_main/urls.py as follows:

# jobs_board_main/urls.py
from django.urls import path

from .views import get_jobs, get_job, subscribe

urlpatterns = [
    # All jobs
    path('jobs/', get_jobs, name="jobs_view"),
    # A single job
    path('jobs/<int:id>', get_job, name="job_view"),
    # Subscribe to a job
    path('jobs/<int:id>/subscribe', subscribe, name="subscribe_view"),
]

We can now test our main Jobs Board application by viewing the job posts listings, clicking on one, and submitting an email address that will receive updates.

Now that we have a working application, it is time to bring in Django Signals and notify users and subscribers when certain events take place. Job postings are tied to a certain company whose email we record; we want to notify that company when a new user subscribes to its job posting. We also want to notify subscribed users when a job posting is taken down.

To notify users when a job posting is taken down or deleted, we will utilize Django's built-in pre_delete signal. We will also create our own signal called new_subscriber, which we will use to notify companies when users subscribe to their job postings.

We create our custom signals by creating a signals.py file in our jobs_board_main application:

# jobs_board_main/signals.py
from django.dispatch import Signal

new_subscriber = Signal(providing_args=["job", "subscriber"])

That's it! Our custom signal is ready to be invoked after a user has successfully subscribed to a job posting as follows in our jobs_board_main/views.py file:

# jobs_board_main/views.py
# Existing imports and code are maintained and truncated for brevity
from .signals import new_subscriber

def subscribe(request, id):
    job = Job.objects.get(pk=id)

    subscriber = Subscriber(email=request.POST['email'])
    subscriber.save()

    subscription = Subscription(user=subscriber, job=job, email=subscriber.email)
    subscription.save()

    # Send our custom signal; the sender must match the receiver's
    # sender=Subscription for the handler to be dispatched
    new_subscriber.send(sender=Subscription, job=job, subscriber=subscriber)

    payload = {
        'job': job,
        'email': request.POST['email']
    }
    return render(request, 'subscribed.html', {'payload': payload})

We don't have to worry about sending the pre_delete signal ourselves, as Django sends it for us automatically just before a job posting is deleted. The reason we use pre_delete rather than post_delete is that when a Job is deleted, all of its linked subscriptions are deleted along with it, and we need that subscription data before it is gone.

Let us now consume the signals that we have just sent in a separate jobs_board_notifications app.

Part 2: The Jobs Board Notifications App

We already created the jobs_board_notifications application and connected it to our Django project. In this section, we will consume the signals sent from our main application and send out the notifications. Django has built-in functionality to send out emails, but for development purposes, we will print the messages to the console instead.

Our jobs_board_notifications application does not need user interaction, so we do not need to create any views or templates for it. Its only goal is to receive signals and send out notifications. We will implement this functionality in models.py since it gets imported early when the application starts.

Let us receive our signals in our jobs_board_notifications/models.py:

# jobs_board_notifications/models.py
from django.db.models.signals import pre_delete
from django.dispatch import receiver

from jobs_board_main.signals import new_subscriber
from jobs_board_main.models import Job, Subscriber, Subscription

@receiver(new_subscriber, sender=Subscription)
def handle_new_subscription(sender, **kwargs):
    subscriber = kwargs['subscriber']
    job = kwargs['job']

    message = """User {} has just subscribed to the Job {}.
    """.format(subscriber.email, job.title)

    print(message)

@receiver(pre_delete, sender=Job)
def handle_deleted_job_posting(sender, **kwargs):
    job = kwargs['instance']

    # Find the subscribers list
    subscribers = Subscription.objects.filter(job=job)

    for subscriber in subscribers:
        message = """Dear {}, the job posting {} by {} has been taken down.
        """.format(subscriber.email, job.title, job.company)

        print(message)

In jobs_board_notifications/models.py, we import our custom signal, the built-in pre_delete signal, and our models. Using the @receiver decorator, we capture the signals along with the contextual data passed with them as keyword arguments.

Upon receiving the contextual data, we use it to dispatch the "emails" (recall that we're just printing to the console for simplicity): companies are notified when a user subscribes, and subscribers are notified when a job posting is deleted.


Once we have created a job in our admin dashboard, it is available for users to subscribe. When users subscribe, the following email is sent out from the jobs_board_notifications application to the company that owns the posting:

This is proof that our new_subscriber signal was sent out from the jobs_board_main application and received by the jobs_board_notifications application.

When a job posting is deleted, all users who subscribed to the job posting get notified via email, as follows:

Django's pre_delete signal came in handy and our handler sent out notifications to the subscribed users that the particular job posting has been taken down.


In this article, we have built a Django project with two applications that communicate through Django Signals in response to certain events. The two applications are decoupled, and the complexity of communication between them is greatly reduced. When a user subscribes to a job posting, we notify the company. In turn, when a job posting is deleted, we notify all subscribed users that it has been taken down.

However, there are some things to keep in mind when utilizing Django Signals. When signals are not well documented, new maintainers may have a hard time identifying the root cause of certain issues or unexpected behavior. Therefore, when signals are used in an application, it is a good idea to document which signals are sent, where they are received, and the reason behind them. This will help anyone maintaining the code to understand application behavior and solve issues faster. It is also helpful to note that signals are sent out synchronously: they are not executed in the background or by any asynchronous jobs.
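The synchronous behavior described above can be illustrated with a minimal, pure-Python sketch of a signal dispatcher. This is an illustration only, not Django's actual implementation:

```python
# A minimal sketch of synchronous signal dispatch.
class Signal:
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, sender, **kwargs):
        # Receivers run one after another, in the caller's thread;
        # send() does not return until every receiver has finished.
        return [(r, r(sender=sender, **kwargs)) for r in self._receivers]

new_subscriber = Signal()
calls = []
new_subscriber.connect(lambda sender, **kw: calls.append(kw["email"]))
results = new_subscriber.send(sender=None, email="jane@example.com")

# By the time send() returns, the receiver has already run.
print(calls)  # ['jane@example.com']
```

If a receiver is slow (sending a real email, for instance), the view that called send() will block until it finishes, which is why heavy work is often handed off to a task queue from inside the receiver.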

With all this information about Django's Signals and the demo project, we should be able to harness the power of Signals in our Django web projects.

The source code for this project is available here on GitHub.

Categories: FLOSS Project Planets

Codementor: Top 9 Django Concepts - Part 1: 4 Mins

Planet Python - Tue, 2019-08-13 04:51
The 1st part of a 3 part series on 9 concepts of Django to help any aspiring Django developer to accelerate their learnings
Categories: FLOSS Project Planets

Agaric Collective: Migrating dates into Drupal

Planet Drupal - Tue, 2019-08-13 03:19

Today we will learn how to migrate dates into Drupal. Depending on your field type and configuration, there are various possible combinations. You can store a single date or a date range. You can store only the date component or also include the time. You might have timezones to take into account. Importing the node creation date requires a slightly different configuration. In addition to the examples, a list of things to consider when migrating dates is also presented.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD date, whose machine name is ud_migrations_date. The migration to execute is udm_date. Notice that this migration writes to a content type called UD Date and to three fields: field_ud_date, field_ud_date_range, and field_ud_datetime. This content type and these fields will be created when the module is installed. They will also be removed when the module is uninstalled. The module itself depends on the following modules provided by Drupal core: datetime, datetime_range, and migrate.

Note: Configuration placed in a module’s config/install directory will be copied to Drupal’s active configuration. And if those files have a dependencies/enforced/module key, the configuration will be removed when the listed modules are uninstalled. That is how the content type and fields are automatically created.

PHP date format characters

To migrate dates, you need to be familiar with the format characters of the date PHP function. Basically, you need to find a pattern that matches the date format you need to migrate to and from. For example, January 1, 2019 is described by the F j, Y pattern.

As mentioned in the previous post, you need to pay close attention to how you create the pattern. Upper- and lowercase letters represent different things, like Y and y for the year with four digits versus two digits, respectively. Some date components have subtle variations, like d and j for the day of the month with or without leading zeros, respectively. Also, take into account white spaces and date component separators. If you need to include a literal letter like T, it has to be escaped as \T. If the pattern is wrong, an error will be raised and the migration will fail.
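As an analogy only — Drupal evaluates PHP date patterns, not Python ones — the pattern-matching idea can be sketched with Python's strptime/strftime directives, which play the same role as PHP's format characters:

```python
from datetime import datetime

# PHP's 'F j, Y' (e.g. 'January 1, 2019') corresponds roughly to
# Python's '%B %d, %Y'.
parsed = datetime.strptime('January 1, 2019', '%B %d, %Y')
print(parsed.strftime('%Y-%m-%d'))  # 2019-01-01

# Just as in Drupal, a pattern that does not match raises an error.
try:
    datetime.strptime('January 1, 2019', '%Y-%m-%d')
except ValueError:
    print('pattern mismatch')
```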

Date format conversions

For date conversions, you use the format_date plugin. You specify a from_format based on your source and a to_format based on what Drupal expects. In both cases, you will use the PHP date function's format characters to assemble the required patterns. Optionally, you can define the from_timezone and to_timezone configurations if conversions are needed. Just like any other migration, you need to understand your source format. The following code snippet shows the source and destination sections:

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      node_title: 'Date example 1'
      node_creation_date: 'January 1, 2019 19:15:30'
      src_date: '2019/12/1'
      src_date_end: '2019/12/31'
      src_datetime: '2019/12/24 19:15:30'
destination:
  plugin: 'entity:node'
  default_bundle: ud_date

Node creation time migration

The node creation time is migrated using the created entity property. The source column that contains the data is node_creation_date. An example value is January 1, 2019 19:15:30. Drupal expects a UNIX timestamp like 1546370130. The following snippet shows how to do the transformation:

created:
  plugin: format_date
  source: node_creation_date
  from_format: 'F j, Y H:i:s'
  to_format: 'U'
  from_timezone: 'UTC'
  to_timezone: 'UTC'

Following the documentation, F j, Y H:i:s is the from_format and U is the to_format. In the example, it is assumed that the source is provided in UTC. UNIX timestamps are expressed in UTC as well. Therefore, from_timezone and to_timezone are both set to that value. Even though they are the same, it is important to specify both configuration keys. Otherwise, the from timezone might be picked up from your server’s configuration. Refer to the article on user migrations for more details on how to migrate when UNIX timestamps are expected.
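You can double-check the example values with Python's datetime — a cross-check of the arithmetic, not the format_date plugin itself:

```python
from datetime import datetime, timezone

# 'January 1, 2019 19:15:30', assumed UTC, should become the
# UNIX timestamp 1546370130.
source = datetime.strptime('January 1, 2019 19:15:30', '%B %d, %Y %H:%M:%S')
timestamp = int(source.replace(tzinfo=timezone.utc).timestamp())
print(timestamp)  # 1546370130
```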

Date only migration

The Date module provided by core offers two storage options. You can store the date only, or you can choose to store the date and time. First, let’s consider a date only field. The source column that contains the data is src_date. An example value is '2019/12/1'. Drupal expects date only fields to store data in Y-m-d format like '2019-12-01'. No timezones are involved in migrating this field. The following snippet shows how to do the transformation.

field_ud_date/value:
  plugin: format_date
  source: src_date
  from_format: 'Y/m/j'
  to_format: 'Y-m-d'

Date range migration

The Date Range module provided by Drupal core allows you to have a start and an end date in a single field. The src_date and src_date_end source columns contain the start and end date, respectively. This migration is very similar to date only fields. The difference is that you need to import an extra subfield to store the end date. The following snippet shows how to do the transformation:

field_ud_date_range/value: '@field_ud_date/value'
field_ud_date_range/end_value:
  plugin: format_date
  source: src_date_end
  from_format: 'Y/m/j'
  to_format: 'Y-m-d'

The value subfield stores the start date. The source column used in the example is the same one used for the field_ud_date field. Drupal uses the same format internally for date only and date range fields. Considering these two things, it is possible to reuse the field_ud_date mapping to set the start date of the field_ud_date_range field. To do it, you type the name of the previously mapped field in quotes (') and precede it with an at sign (@). Details on this syntax can be found in the blog post about the migrate process pipeline. One important detail is that when field_ud_date was mapped, the value subfield was specified: field_ud_date/value. Because of this, when reusing that mapping, you must also specify the subfield: '@field_ud_date/value'. The end_value subfield stores the end date. The mapping is similar to field_ud_date except that the source column is src_date_end.

Note: The Date Range module does not come enabled by default. To be able to use it in the example, it is set as a dependency of the demo migration module.

Datetime migration

A date and time field stores its value in Y-m-d\TH:i:s format. Note it does not include a timezone. Instead, UTC is assumed by default. In the example, the source column that contains the data is src_datetime. An example value is 2019/12/24 19:15:30. Let’s assume that all dates are provided with a timezone value of America/Managua. The following snippet shows how to do the transformation:

field_ud_datetime/value:
  plugin: format_date
  source: src_datetime
  from_format: 'Y/m/j H:i:s'
  to_format: 'Y-m-d\TH:i:s'
  from_timezone: 'America/Managua'
  to_timezone: 'UTC'

If you need the timezone to be dynamic, things get a bit harder. The from_timezone and to_timezone settings expect a literal value. It is not possible to read a source column to set these configurations. An alternative is that your source column includes timezone information like 2019/12/24 19:15:30 -07:00. In that case, you would need to tweak the from_format to include the timezone component and leave out the from_timezone configuration.
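As a cross-check of the America/Managua example in plain Python, using a fixed UTC-6 offset (a simplification — Managua does not observe DST, so the offset is constant — and again an analogy, not the plugin itself):

```python
from datetime import datetime, timezone, timedelta

managua = timezone(timedelta(hours=-6))  # America/Managua is UTC-6 year-round
local = datetime.strptime('2019/12/24 19:15:30', '%Y/%m/%d %H:%M:%S')
utc_value = local.replace(tzinfo=managua).astimezone(timezone.utc)

# The stored value rolls over to the next day in UTC.
print(utc_value.strftime('%Y-%m-%dT%H:%M:%S'))  # 2019-12-25T01:15:30
```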

Things to consider

Date migrations can be tricky because they can be affected by things outside of the Migrate API. Here is a non-exhaustive list of things to consider:

  • For date and time fields, the transformation might be affected by your server’s timezone if you do not manually set the from_timezone configuration.
  • People might see the date and time according to the preferences in their user profile. That is, two users might see a different value for the same migrated field if their preferred timezones are not the same.
  • For date only fields, the user might see a time depending on the format used to display them. A list of available formats can be found at /admin/config/regional/date-time.
  • A field can always be configured to be presented in a specific timezone. This would override the site’s timezone and the user’s preferred timezone.

What did you learn in today’s blog post? Did you know that entity properties and date fields expect different destination formats? Did you know how to do timezone conversions? What challenges have you found when migrating dates and times? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories: FLOSS Project Planets