FLOSS Project Planets

Reproducible builds folks: Reproducible Builds: Weekly report #132

Planet Debian - Tue, 2017-11-07 08:17

Here's what happened in the Reproducible Builds effort between Sunday October 29 and Saturday November 4 2017:

Past events
  • From October 31st — November 2nd we held the 3rd Reproducible Builds summit in Berlin, Germany. A full, in-depth report will be posted in the next week or so.
Upcoming events
  • On November 8th Jonathan Bustillos Osornio (jathan) will present at CubaConf Havana.

  • On November 17th Chris Lamb will present at Open Compliance Summit, Yokohama, Japan on how reproducible builds ensures the long-term sustainability of technology infrastructure.

Reproducible work in other projects

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

7 package reviews have been added, 43 have been updated and 47 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (44)
  • Andreas Moog (1)
  • Lucas Nussbaum (7)
  • Steve Langasek (1)
Documentation updates

diffoscope development

Version 88 was uploaded to unstable by Mattia Rizzolo. It included contributions (already covered by posts of the previous weeks) from:

  • Mattia Rizzolo
    • tests/comparators/dtb: compatibility with version 1.4.5. (Closes: #880279)
  • Chris Lamb
    • comparators:
      • binwalk: improve names in output of "internal" members. #877525
      • Omit misleading "any of" prefix when only complaining about one module in ImportError messages.
    • Don't crash on malformed "md5sums" files. (Closes: #877473)
    • tests/comparators:
      • ps: ps2ascii > 9.21 now varies on timezone, so skip this test for now.
      • dtby: only parse the version number, not any "-dirty" suffix.
    • debian/watch: Use HTTPS URI.
  • Ximin Luo
    • comparators:
      • utils/file: Diff container metadata centrally. This fixes a last remaining bug in fuzzy-matching across containers. (Closes: #797759)
      • Fix all the affected comparators after the above change.
  • Holger Levsen
    • Bump Standards-Version to 4.1.1, no changes needed.
strip-nondeterminism development

Version 0.040-1 was uploaded to unstable by Mattia Rizzolo. It included contributions already covered by posts of the previous weeks, as well as new ones from:

Version 0.5.2-2 was uploaded to unstable by Holger Levsen.

It included contributions already covered by posts of the previous weeks, as well as new ones from:

reprotest development

buildinfo.debian.net development

tests.reproducible-builds.org
  • Mattia Rizzolo:
    • archlinux: enable schroot building on pb4 as well
    • archlinux: don't install the deprecated abs tool
    • archlinux: try to re-enable one schroot creation job
  • lynxis
    • lede: replace TMPDIR -> RESULTSDIR
    • lede: openwrt_get_banner(): use locals instead of globals
    • lede: add newline to $CONFIG
    • lede: show git log -1 in jenkins log
  • Holger Levsen:
    • lede: add very simple landing page
  • Juliana Oliveira Rodrigues
    • archlinux: adds pacman-git dependencies
  • kpcyrd
    • archlinux: disable signature verification when running in the future
    • archlinux: use pacman-git until the next release
    • archlinux: make pacman fail less early
    • archlinux: use sudo to prepare chroot
    • archlinux: remove -rf for regular file
    • archlinux: avoid possible TOCTOU issue
    • archlinux: Try to fix tar extraction
    • archlinux: fix sha1sums parsing
Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Categories: FLOSS Project Planets

KDE Promo Activity Report – October 2017

Planet KDE - Tue, 2017-11-07 08:08

Another week, another KDE Promo report!

This edition of the report is special in that we’ve now synchronized the reports posted on the mailing list with the ones posted here. In other words, the posts shared here will no longer lag behind the mailing list.

If you missed the previous report, don’t worry: you can read it here.

So, what have we been up to in the past month, and which tasks do we offer to potential contributors?
Let’s have a look.

1) Qt World Summit 2017 – Post-event analysis and discussion

QtWS 2017 ended, and we are generally happy with our presence and activities at the event. However, there is always room for improvement.

That’s why we started a task on Phabricator to discuss what went wrong and what we could do better in the future.

How you can help:

  • if you were at QtWS 2017 either as a visitor or as part of our team, please write your impressions on Phabricator
  • if you have any suggestions for what we can improve – not just at QtWS, but at other events where KDE has a presence – feel free to mention them in your comments.
2) Organizing the KDE Promo Sprint

This summer we ran a short survey to see how many people were interested in contributing to KDE Promo.
If you’re one of those people, we would like to meet you and work with you in person!
And if you didn’t take the survey, but you’re serious about contributing to KDE Promo, you are more than welcome to join us.

How you can help:

3) Preparing the End-of-Year Fundraising Campaign

Winter is coming, right? That means it’s high time we started organizing the end-of-year fundraising campaign.
Our Randa 2017 fundraiser reached 30% of its goal, so it would be great if we could achieve more in this one.

How you can help:

  • join the kde-promo IRC channel and let us know that you would like to work on this
  • take a look at the KDE Community 2017 Highlight Reel and add important events we might have missed
  • write an article/announcement for the Dot about the campaign based on the highlight reel
  • read this email thread to learn how fundraising has been done before, and what lessons can be extracted from that experience.
4) KDE.org Website Redesign

Just a reminder that a major overhaul of the official KDE website is still in progress.

The website will not only be visually redesigned, but will also switch to a new CMS (namely, WordPress).

We haven’t gathered a lot of new feedback since the last report, so please share your opinions and suggestions for the new website if you have any.

5) Beyond Promo: Voting on KDE Community Goals has officially started!

As you’re probably aware, the KDE Community has been working on long-term goals for the past couple of months.

With the proposals submitted and the review/discussion period over, now is the time to vote for the goals that you find most important for the community.

The voting will be open for 2 weeks.

If you’re an active KDE contributor, you should have already received the information on how to vote.
This is your chance to influence the direction in which the KDE Community will be moving in the future, so make sure to participate.

Reasons to Celebrate

Here’s a small selection of wins, successes, and cool achievements from the last month.

1) KDE has a new patron – Private Internet Access!

We reported on this, so you can read more about the partnership.

2) Purism Librem5 campaign exceeded its funding goal!

The crowdfunding campaign has succeeded, and then some – they managed to reach $1 500 000. Congratulations!

This means our partnership will proceed to the next level as agreed.

We have supported the campaign with blog posts, Dot articles, social media posts, guerrilla marketing on Reddit, blasts to media outlets, etc. Many tech news sites have talked about our involvement as collected in this Phabricator task, and the campaign success can be traced back directly to when we got involved.

3) KDE Community reached 51 000 followers on Twitter!

The @kdecommunity Twitter account has recently reached 50k followers, and then another thousand soon after, which is great. (If you still aren’t following us, go do that now!)

What isn’t so great is that Paul still has to manually collect all analytics data from social media – and he’s been doing that for several months now.

If you can recommend some tools that could help with automating this, please let us know in this Phabricator task.

How to Join KDE Promo

Do you want to contribute to KDE, but don’t know how to code? You can help us spread the word about our software and attract more users to our community!

If you’d like to join KDE Promo, you can:

1) subscribe to the kde-promo mailing list and send an introductory email

2) join the #kde-promo IRC channel or the Telegram group and tell us a bit about yourself

Let us know what you’re good at and what you would like to do for KDE Promo.

We’ll find tasks for you and help you get started!

And that’s it for today’s Promo report. We hope it was informative, and that it helped you understand what we’re doing and how you can get involved.

Please let us know if you have any suggestions for improving our reports, and as always, feel free to send us your feedback.

Thanks a lot!

This report was originally published on the kde-promo mailing list. We are sharing it on Planet KDE for those who might be interested in promo activities, but are not yet subscribed to the mailing list.

Categories: FLOSS Project Planets

KTextEditorPreviewPlugin 0.2.1 (last stand-alone)

Planet KDE - Tue, 2017-11-07 03:58

KTextEditorPreviewPlugin 0.2.1 has been released.

Example: KTextEditor Document Preview plugin used with Kompare KParts plugin in Kate

The KTextEditorPreviewPlugin software provides the KTextEditor Document Preview Plugin, a plugin for the editor Kate, the IDE KDevelop, or other software using the KTextEditor framework.

The plugin enables a live preview of the currently edited text document in the final format, in the sidebar (Kate) or as a tool view (KDevelop). So when editing e.g. a Markdown text or an SVG image, the result is instantly visible next to the source text. For the display, the plugin uses the KParts plugin which is currently selected as the preferred one for the MIME type of the document. If there is no KParts plugin for that type, no preview is possible.

Download from:
https://download.kde.org/stable/ktexteditorpreviewplugin/0.2.1/src

sha256:
f5fd393f15fb04a49b22b16b136b1f614f4d5502a9a9a0444d83a74eff3a1e19 ktexteditorpreviewplugin-0.2.1.tar.xz

Signed with my new PGP key
E191 FD5B E6F4 6870 F09E 82B2 024E 7FB4 3D01 5474
Friedrich W. H. Kossebau
ktexteditorpreviewplugin-0.2.1.tar.xz.sig

Change since 0.2.0
  • Translations improved and added for new languages (es, pl, ru)
Notes

This is the last stand-alone release of this plugin. It has been merged post-release, as planned, into the Kate repository, so it will see its future releases as part of Kate, with the first being part of KDE Applications 17.12.

Developers: Improve your favourite KParts plugin

While a usual KParts plugin works out of the box, for a perfect experience with the Automatic Updating option some further improvements might be needed:

A few KParts plugins have already seen such adaptations, like the SVGPart and the KUIViewerPart (see also blog post), adaptations to be released with KDE Applications 17.12.
Another KParts plugin has been written with that in mind from the start, the KMarkdownWebViewPart (see also blog post), which has already been released.

You might want to take some guidance from the respective commit “Support loading by stream and restoring state on reload” to the SVGPart repository.


Categories: FLOSS Project Planets

Weekly Python Chat: Testing Python with pytest

Planet Python - Mon, 2017-11-06 22:30

Special guest Brian Okken is joining us this week for an introduction to pytest followed by a Q&A session during which we'll answer your questions about pytest and testing in Python.

Categories: FLOSS Project Planets

Don Armstrong: Autorandr: automatically adjust screen layout

Planet Debian - Mon, 2017-11-06 22:05

Like many laptop users, I often plug my laptop into different monitor setups (multiple monitors at my desk, projector when presenting, etc.) Running xrandr commands or clicking through interfaces gets tedious, and writing scripts isn't much better.

Recently, I ran across autorandr, which detects attached monitors using EDID (and other settings), saves xrandr configurations, and restores them. It can also run arbitrary scripts when a particular configuration is loaded. I've packaged it, and it is currently waiting in NEW. If you can't wait, the deb is here and the git repo is here.

To use it, simply install the package, and create your initial configuration (in my case, undocked):

autorandr --save undocked

then, dock your laptop (or plug in your external monitor(s)), change the configuration using xrandr (or whatever you use), and save your new configuration (in my case, workstation):

autorandr --save workstation

repeat for any additional configurations you have (or as you find new configurations).

Autorandr has udev, systemd, and pm-utils hooks, and autorandr --change should be run any time that new displays appear. You can also run autorandr --change or autorandr --load workstation manually too if you need to. You can also add your own ~/.config/autorandr/$PROFILE/postswitch script to run after a configuration is loaded. Since I run i3, my workstation configuration looks like this:

#!/bin/bash
xrandr --dpi 92
xrandr --output DP2-2 --primary
i3-msg '[workspace="^(1|4|6)"] move workspace to output DP2-2;'
i3-msg '[workspace="^(2|5|9)"] move workspace to output DP2-3;'
i3-msg '[workspace="^(3|8)"] move workspace to output DP2-1;'

which fixes the dpi appropriately, sets the primary screen (possibly not needed?), and moves the i3 workspaces about. You can also arrange for configurations to never be run by adding a block hook in the profile directory.

Check it out if you change your monitor configuration regularly!

Categories: FLOSS Project Planets

Fabio Zadrozny: PyDev 6.1.0: dealing with blank lines in code formatter

Planet Python - Mon, 2017-11-06 21:37
PyDev 6.1.0 is now available for download. The major change in this release is in the code formatter, which can now deal with adding or removing blank lines so that code can properly conform to pep-8, besides having a number of bugs fixed (see http://www.pydev.org for more details).
Now, why use the PyDev code formatter at all when there are so many other options available? (e.g.: autopep8, yapf, PythonTidy -- and autopep8 is even already included by default in PyDev)
Well, the PyDev code formatter is unique in that it tries to make as few changes as possible to the code, so it conforms to the coding format that the programmer uses, just fixing a few (usually obvious) issues, such as spaces after commas, spaces in comments or around operators, or right-trimming lines, with an option to fix only the lines actually changed.
-- for actually changing the indentation of statements or comments, PyDev has options which can be manually activated, such as wrap or unwrap statement -- through Ctrl+1, wrap statement or Ctrl+1, unwrap statement in the line which has the contents to be wrapped or unwrapped, or Ctrl+2, w to wrap comments -- see: http://pydev.blogspot.com.br/2015/04/wrapping-docstringscomments-in-pydev.html.
Also, the PyDev code formatter is pretty fast, so I have no issue leaving the autoformat-on-save option turned on (speed is the main reason why I added such a feature to the PyDev code formatter instead of going with autopep8 or integrating another code formatting tool).
So, that's it, enjoy!
p.s.: Thank you to all PyDev supporters -- https://www.brainwy.com/supporters/PyDev/ -- who enable PyDev to keep on being improved!
p.s.: LiClipse 4.3.1 already bundles PyDev 6.1.0, see: http://www.liclipse.com/download.html for download links.
Categories: FLOSS Project Planets

OpenStack Summit Sydney - librmb presentation

Planet KDE - Mon, 2017-11-06 21:10
I presented the librmb project today at the OpenStack Summit in Sydney. You can find the slides from the Lightning Talk on GitHub, and a video recording of the session is available here.
Categories: FLOSS Project Planets

Rogério Brito: Some activities of the day

Planet Debian - Mon, 2017-11-06 19:52

Yesterday, I printed the first draft of the first chapter when my little boy was here and he was impressed with this strange object called a "printer". Before I printed what I needed, I fired up LibreOffice and chose the biggest font size that was available and let him type his first name by himself. He was quicker than I thought with a keyboard. After seeing me print his first name, he was jumping up and down with joy of having created something and even showed grandma and grandpa what he had done.

He, then, wanted more and I taught him how to use that backspace key, what it meant and he wanted to type his full name. I let him and taught him that there is a key called space that he should type every time he wants to start a new word and, in the end, he typed his first two names. To my surprise, he memorized the icon with the printer (which I must say that I have to hunt every time, since it seems so similar to the adjacent ones!) and pressed this new key called "Enter". When he pressed, he wasn't expecting the printer on his right to start making noises and printing his name.

He was so excited and it was so nice to see his reaction full of joy to get a job done!

I am thinking of getting a spare computer, building it with him and for him, so that he can call it his computer every time he comes to see daddy. As a serendipitous situation, Packt Publishing offered yesterday their title "Python Projects for Kids". Unfortunately, he does not yet know how to read, but I guess that the right age is coming soon, which is a good thing to make him be educated "the right way" (that is, with the best support, teaching and patience that I can give him).

Anyway, I printed the first draft of the first chapter and today I have to turn it in.

As I write this, I am downloading a virtual machine from Microsoft to try to install Java on it. Let me see if it works. I have none of the virtualization options used, though the closest seems to be VirtualBox.

Let me cross my fingers.

In other news, I updated some of the tags of very old posts of this blog, and I am seriously thinking about switching from ikiwiki to another blog platform. It is slow, very slow on my system with the repositories that I have, especially on my armel system. Some non-interpreted system would be best, but I don't know if such a thing even exists. But the killer problem is that it doesn't easily support typing Mathematics (even though a 3rd party plugin for MathJax exists).

On the other hand, I just received an answer on twitter from @telegram and it was nice:

Hello, Telegram supports bold and italic. You can type **bold** and __italic__. On mobile, you can also highlight text for this as well.

It is nice that this works with telegram-desktop too.

Besides that, I filed some bugs on Debian's BTS, responded to some issues on my projects on GitHub (I'm slowly getting back to maintaining things) and filed wishlist bugs on some other projects.

Oh, and I grabbed a copy of "Wonder Woman" ("Mulher Maravilha") and "Despicable Me 3" ("Meu Malvado Favorito 3") dubbed in Brazilian Portuguese for my son. I have to convert the audio from AAC-LC in 6 channels to AC3 or to stereo. Otherwise, my TVs have problems with the videos (one refuses to play the entire file and another plays the audio with sounds of hiccups).

Edit: After converting the VirtualBox image taken from Microsoft, I could easily use qemu/kvm to create screenshots of the installation of Java. The command that I used (for future reference) is:

qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 -net nic,model=e1000 -net user -soundhw ac97 -drive index=0,media=disk,cache=unsafe,file=win7.qcow2

Edit: Fixed some typos.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppQuantuccia 0.0.2

Planet Debian - Mon, 2017-11-06 19:33

A first maintenance release of RcppQuantuccia got to CRAN earlier today.

RcppQuantuccia brings the Quantuccia header-only subset / variant of QuantLib to R. At present it mostly offers calendaring, but Quantuccia just got a decent amount of new functions so hopefully we can offer more here too.

This release was motivated by the upcoming Rcpp release which will deprecate the old Date and Datetime vectors in favour of newer ones. So this release of RcppQuantuccia switches to the newer ones.

Other changes are below:

Changes in version 0.0.2 (2017-11-06)
  • Added calendars for Canada, China, Germany, Japan and United Kingdom.

  • Added bespoke and joint calendars.

  • Using new date(time) vectors (#6).

Courtesy of CRANberries, there is also a diffstat report relative to the previous release. More information is on the RcppQuantuccia page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Justin Mason: Links for 2017-11-06

Planet Apache - Mon, 2017-11-06 18:58
  • How to effectively complain to an Irish broadcaster about a public affairs show

    Simon McGarr: “If you think that a public affairs show has failed to address a matter with proper balance, you can (Tweet) say it to the breeze or complain. There is a process to follow to make an effective complaint 1) complain to broadcaster 2) complain to BAI if unhappy with response.” Thread with more details, and yet more at https://twitter.com/IrishTV_films/status/927172642544783360

    (tags: complaining complaints rte bai ireland current-affairs)

  • The 10 Top Recommendations for the AI Field in 2017 from the AI Now Institute

    I am 100% behind this. There’s so much potential for hidden bias and unethical discrimination in careless AI/ML deployment.

    While AI holds significant promise, we’re seeing significant challenges in the rapid push to integrate these systems into high stakes domains. In criminal justice, a team at Propublica, and multiple academics since, have investigated how an algorithm used by courts and law enforcement to predict recidivism in criminal defendants may be introducing significant bias against African Americans. In a healthcare setting, a study at the University of Pittsburgh Medical Center observed that an AI system used to triage pneumonia patients was missing a major risk factor for severe complications. In the education field, teachers in Texas successfully sued their school district for evaluating them based on a ‘black box’ algorithm, which was exposed to be deeply flawed. This handful of examples is just the start -- there’s much more we do not yet know. Part of the challenge is that the industry currently lacks standardized methods for testing and auditing AI systems to ensure they are safe and not amplifying bias. Yet early-stage AI systems are being introduced simultaneously across multiple areas, including healthcare, finance, law, education, and the workplace. These systems are increasingly being used to predict everything from our taste in music, to our likelihood of experiencing mental illness, to our fitness for a job or a loan.

    (tags: ai algorithms machine-learning ai-now ethics bias racism discrimination)

  • Something is wrong on the internet – James Bridle – Medium

    ‘an essay on YouTube, children’s videos, automation, abuse, and violence, which crystallises a lot of my current feelings about the internet through a particularly unpleasant example from it. […] What we’re talking about is very young children [..] being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.’

    (tags: internet youtube children web automation violence horror 4chan james-bridle)

Categories: FLOSS Project Planets

Vladimir Iakolev: Soundlights with ESP8266 and NeoPixel Strip

Planet Python - Mon, 2017-11-06 18:40

About a year ago I made soundlights with Raspberry Pi. But RPI is a bit of an overkill for this simple task and it’s quite big, doesn’t have WiFi out of the box and practically can’t be used without a power adapter.

So I decided to port soundlights to ESP8266. The main idea was to reuse as much as possible from the previous implementation, so the parts with patched audio visualizer and colors generation are the same. In a few words, I’ve patched cava to print numbers instead of showing pretty bars in a terminal. And I’ve generated colors with a code found on Quora.

And in the current implementation I decided to make it very simple to use; the only requirement is to have a machine with cava and the ESP8266 on the same WiFi network. So I chose UDP broadcasting as a way to send data to the ESP8266. And because there are just 60 LEDs and the color of a LED is three values from 0 to 255, the colors for the whole strip are just 180 bytes. So they fit in one UDP packet.

Let’s start with the part with cava:

import sys
import socket
import array

COLORS_COUNT = 256
COLORS_OFFSET = 50


def get_spaced_colors(n):
    max_value = 16581375
    interval = int(max_value / n)
    colors = [hex(i)[2:].zfill(6) for i in range(0, max_value, interval)]
    return [(int(color[:2], 16), int(color[2:4], 16), int(color[4:6], 16))
            for color in colors]


def send(colors):
    line = array.array('B', colors).tostring()
    sock.sendto(line, ('255.255.255.255', 42424))


sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

colors = get_spaced_colors(COLORS_COUNT)

while True:
    try:
        nums = map(int, sys.stdin.readline()[:-1].split())
        led_colors = [c for num in nums for c in colors[num]]
        send(led_colors)
    except Exception as e:
        print(e)

It can be used like:

unbuffer ./cava -p soundlights/cava_config | python cava/soundlights/esp/client.py

And it just reads numbers from the cava output, generates colors, transforms them to bytes and broadcasts them on port 42424.

The ESP8266 part is even simpler:

import socket
import machine
import neopixel

np = neopixel.NeoPixel(machine.Pin(5), 60)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 42424))

while True:
    line, _ = sock.recvfrom(180)
    if len(line) < 180:
        continue

    for i in range(60):
        np[i] = (line[i * 3], line[i * 3 + 1], line[i * 3 + 2])

    np.write()

It just receives broadcasts on port 42424 and changes the colors of the LEDs.

In the end, this version has less code and just works. It's even some sort of IoT and with some effort can become a wearable device.

Github.

Categories: FLOSS Project Planets

Steve Loughran: I do not fear Kerberos, but I do fear Apple Itunes billing

Planet Apache - Mon, 2017-11-06 18:38
I laugh at Kerberos messages. When I see a stack trace with a meaningless network error I go "that's interesting". I even learned PowerShell in a morning to fix where I'd managed to break our Windows build and tests.

But there is now one piece of software I do not ever want to approach, ever again. Apple icloud billing.

So far, since Saturday's warnings on my phone telling me that there was a billing problem
  1. Tried and repeatedly failed to update my card details
  2. Had my VISA card seemingly blocked by my bank,
  3. Been locked out of our Netflix subscription on account of them failing to bill a card which has been locked out by my bank
  4. Had a chat line with someone on Apple online, who finally told me to phone an 800 number.
  5. Who are closed until office hours tomorrow
What am I trying to do? Set up iCloud family storage so I get a full resolution copy of my pics shared across devices, also give the other two members of our household lots of storage.

What have I achieved? Apart from a card lockout and loss of Netflix, nothing.

If this was a work problem I'd be loading debug-level log files of tens of GB in editors, using regexps to delete all lines of noise, then trying to work backwards from the first stack trace in one process to where something in another system went awry. Not here, though: here I'm thinking "I don't need this". So if I don't get this sorted out by the end of the week, I won't be. I will have been defeated.

Last month I opted to pay £7/month for 2TB of iCloud storage. This not only looked like great value for 2TB of storage, the fact I could share it with the rest of the family meant that we got a very good deal for all that data. And, with integration with iPhotos, I could use it to upload all my full resolution pictures. So sign up I did

My card is actually bonded to Bina's account, but here I set up the storage and had to re-enter it, where the fact that the dropdown menu switched to Finnish was most amusing


With hindsight I should have taken "billing setup page cannot maintain consistency of locales between UI, known region of user, and menus" as a warning sign that something was broken.

Other than that, everything seemed to work. Photo upload working well. I don't yet keep my full photoset managed by iPhotos; it's long been a partitionedBy(year, month) directory tree built up with the now unmaintained Picasa, backed up at full res to our home server, at lower res to google photos. The iCloud experience seemed to be going smoothly; smoothly enough to think about the logistics of a full photo import. One factor there iCloud photos downloader works great as a way of downloading the full res images into the year/month layout, so I can pull images over to the server, so giving me backup and exit strategies.

That was on the Friday. On the Saturday a little alert pops up on the phone, matched by an email


Something has gone wrong. Well, no problem, over to billing. First, the phone UI. A couple of attempts and no, no joy. Over to the web page

This time, the menus are in German


"Something didn't work but we don't know what". Nice. Again? Same message.

Never mind, I recognise "PayPal" in German, let's try that:

No: failure.

Next attempt: use my Visa credit card, not the bank debit card I normally use. This *appears* to take. At least, I haven't got any more emails, and the photos haven't been deleted. All well to the limits of my observability.

Except, guess what ends up in my inbox instead? Netflix complaining about billing

Hypothesis: repeated failures of apple billing to set things up have caused the bank to lock down the card, it just so happens that Netflix bill the same day (does everyone do the first few days of each month?), and so: blocked off. That is, Apple Billing's issues are sufficient to break Netflix.

Over to the bank, review transactions, drop them a note.

My bank is fairly secure and uses 2FA with a chip-and-pin card inserted into a portable card reader. You can log in without it, but then cannot set up transfers to any new destination. I normally use the card reader and card. Not today though, signatures aren't being accepted. Solution, fall back to the "secrets" and then compose a message

Except of course, the first time I try that, it fails


This is not a good day. Why can't I just have "Unknown failure at GSS API level". That I can handle. Instead what I am seeing here is a cross-service outage choreographed by Apple, which, if it really does take away my photos, will even go into my devices.

Solution: log out, log in. Compose the message in a text editor for ease of resubmission. Paste and submit. Off it goes.

Sunday: don't go near a computer. Phone still got a red marker "billing issues", though I can't distinguish "old billing issues" from new billing issues. That is: no email to say things are fixed. At the same time, no emails to say "things are still broken". Same from netflix: neither a success message nor a failure one. Nothing from the bank either.

Monday: not worrying about this while working. No Kerberos errors there either. Today is a good day, apart from the thermostat on the ground floor not sending "turn the heating on" messages to the boiler, even after swapping the batteries.

After dinner, netflix. Except the TV has been logged out. Log in to netflix on the web and yes, my card is still not valid. Go to the bank, no response there yet. Go back to netflix, insert Visa credit card: it's happy. This is good, as if this card started failing too, I'd be running out of functional payment mechanisms.

Now, what about apple?

No, not English, or indeed, any language I know how to read. What now?

Apple support, in the form of a chat

After a couple of minutes' wait I was talking to someone. I was a bit worried that the person I'm talking to was "allen". I know Allen. Sometimes he's helpful. Let's see.

After explaining my problem and sharing my appleId, Allen had a solution immediately: only the nominated owner of the family account can do the payment, even if the icloud storage account is in the name of another. So log in as them and try and sort stuff out there.

So: log out as me, log in as B., edit the billing. Which is the same card I've been using. Somehow, things went so wrong with Apple billing trying to charge the system off my user ID and failing that I've been blocked everywhere. Solution: over to the VISA credit card. All "seems" well.

But how can I be sure? I've not got any emails from Apple Billing. The little alert in the settings window is gone, but I don't trust it. Without notification from Apple confirming that all is well, I have to assume that things are utterly broken. How can I trust a billing system which has managed to lock me out of my banking or netflix?

I raised this topic with Allen. After a bit of backwards and forwards, he gave me an 800 number to call. Which I did. They are closed after 19:00 hours, so I'll have to wait until tomorrow. I shall be calling them. I shall also be in touch with my bank.

Overall: this has been, so far, an utter disaster. It's not just that the system suffers from broken details (prompts in random languages), and deeply broken back ends (whose card is charged), but it manages to escalate the problem to transitively block out other parts of my online life.

If everything works tomorrow, I'll treat this as a transient disaster. If, on the other hand, things are not working tomorrow, I'm going to give up trying to maintain an iCloud storage account. I'll come up with some other solution. I just can't face having the billing system destroy the rest of my life.
Categories: FLOSS Project Planets

PreviousNext: Composing Docker Local Development: Networking

Planet Drupal - Mon, 2017-11-06 16:47
Share:

It's extremely important to have default values that you can rely on for local Drupal development; one of those is "localhost". In this blog post we will explore what is required to make our local development environment appear as "localhost".

by Nick Schuch / 7 November 2017

In our journey migrating to Docker for local dev we found ourselves running into issues with "discovery" of services eg. Solr/Mysql/Memcache.

In our first iteration we used linking, allowing our services to talk to each other. Some downsides to this were:

  • Tricky to compose an advanced relationship; let's use PHP and PhantomJS as an example:
    • PHP needs to know where PhantomJS is running
    • PhantomJS needs to know the domain of the site that you are running locally
    • Wouldn't it be great if we could just use "localhost" for both of these configurations?
  • DNS entries are only available within the containers themselves, so you cannot run utilities outside of the containers, e.g. a MySQL admin tool

With this in mind, we hatched an idea.....

What if we could just use "localhost" for all interactions between all the containers?

  • If we wanted to access our local projects Apache, http://localhost (inside and outside of container)
  • If we wanted to access our local projects Mailhog, http://localhost:8025 (inside and outside of container)
  • If we wanted to access our local projects Solr, http://localhost:8983 (inside and outside of container)

All this can be achieved with Linux Network Namespaces in Docker Compose.

Network Namespaces

Linux Network Namespaces allow for us to isolate processes into their own "network stacks".

By default, the following happens when a container gets created in Docker:

  • Its own Network Namespace is created
  • A new network interface is added
  • Provided an IP on the default bridge network

However, if a container is created and told to share the same Network Namespace with an existing container, they will both be able to interface with each other on "localhost" or "127.0.0.1".
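
As a quick (hypothetical) way to see this sharing in action - this check is not from the original post, and it assumes Python is available inside the images and that port 9999 is free - you can run a tiny listener in one container and connect to it from the sidecar over "localhost":

import socket

# Run inside the first container (e.g. the "php" service). The socket binds
# to 127.0.0.1, so only processes sharing this network namespace can reach it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 9999))
server.listen(1)
conn, _ = server.accept()
print(conn.recv(1024))
conn.close()
server.close()

Then, from a container that shares the namespace (via network_mode: service:php, or network_mode: host on Linux):

import socket

# Run inside the sidecar container; the connect() only succeeds if both
# containers share the same network stack.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('127.0.0.1', 9999))
client.sendall(b'hello from the sidecar')
client.close()

If the containers were on separate network stacks, the connect() call above would fail with "connection refused".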

Here are working examples for both OSX and Linux.

OSX

  • Mysql and Mail share the PHP container's Network Namespace, giving us "localhost" for "container to container" communication.
  • Port mapping for host to container "localhost"
version: "3"

services:
  php:
    image: previousnext/php:7.1-dev
    # You will notice that we are forwarding ports which do not belong to PHP.
    # We have to declare them here because these "sidecar" services are sharing
    # THIS container's network stack.
    ports:
      - "80:80"
      - "3306:3306"
      - "8025:8025"
    volumes:
      - .:/data:cached
  db:
    image: mariadb
    network_mode: service:php
  mail:
    image: mailhog/mailhog
    network_mode: service:php

Linux

All containers share the Network Namespace of the user's host; nothing else is required.

version: "3"

services:
  php:
    image: previousnext/php:7.1-dev
    # This makes the container run on the same network stack as your
    # workstation. Meaning that you can interact on "localhost".
    network_mode: host
    volumes:
      - .:/data
  db:
    image: mariadb
    network_mode: host
  mail:
    image: mailhog/mailhog
    network_mode: host

Trade offs

To facilitate this approach we had to make some trade offs:

  • We only run 1 project at a time. Only a single process can bind to port 80, 8983 etc.
  • Split out the Docker Compose files into 2 separate files, making it simple for each OS to have its own approach.
Bash aliases

Since we split out our Docker Compose file to be "per OS" we wanted to make it simple for developers to use these files.

After a couple of internal developer meetings, we came up with some bash aliases that developers only have to set up once.

# If you are on a Mac.
alias dc='docker-compose -f docker-compose.osx.yml'

# If you are running Linux.
alias dc='docker-compose -f docker-compose.linux.yml'

A developer can then run all the usual Docker Compose commands with the shorthand dc command eg.

dc up -d

This also keeps the command docker-compose available if a developer is using an external project.

Simple configuration

The following solution has also provided us with a consistent configuration fallback for local development.

We leverage this in multiple places in our settings.php, here is 1 example:

$databases['default']['default']['host'] = getenv("DB_HOST") ?: '127.0.0.1';
  • Dev / Stg / Prod environments set the DB_HOST environment variable
  • Local is always the fallback (127.0.0.1)
Conclusion

While this approach required a deeper knowledge of the Linux kernel, it has yielded a much simpler solution for developers.

How have you managed Docker local dev networking? Let me know in the comments below.

Tagged Docker, Drupal Development

Posted by Nick Schuch
Sys Ops Lead

Dated 7 November 2017

Add new comment
Categories: FLOSS Project Planets

Hook 42: Hook 42 at New England Drupal Camp

Planet Drupal - Mon, 2017-11-06 15:21

We're super excited to attend New England Drupal Camp this year!

Aimee is honored to have been invited to be the keynote speaker this year. She'll be discussing inclusion and diversity in the community. In addition to Aimee's keynote, we are partnering up with our longtime friends at Lingotek to put together a hands-on multilingual workshop that covers Drupal 8 and an integration to Lingotek's Translation Management System.

Just in case that wasn’t enough, we’re also presenting a couple of sessions. One compares the madness of the multilingual modules on Drupal 7 to the new and improved Drupal 8 multilingual approach. We will be presenting another session covering how ANYONE and EVERYONE can help contribute back to the Drupal project, even if they aren’t the most advanced technical person.

Categories: FLOSS Project Planets

4.0 Development Update

Planet KDE - Mon, 2017-11-06 14:51

And then we realized we hadn’t posted news about ongoing Krita development for some time now. The main reason is that we’ve, well, been really busy doing development. The other reason is that we’re stuck making fully-featured preview builds on OSX and Linux. More about that later…

So, what’s been going on? Some of the things we’ve been doing were backported to Krita 3.2 and 3.3, like support for the Windows 8 Pointer API, support for the ANGLE Direct3D display renderer, the new gmic-qt G’Mic plugin, new commandline options, support for touch painting, the new smart patch tool, new brush presets and blending modes… But there is also a lot of other work that simply couldn’t be backported to 3.x.

The last time we did a development update with Krita 4.0 was in June 2017: the first development build for 4.0 already had a large number of new features:

  • the SVG-based vector layers with improved vector handling tools,
  • Allan Marshall’s new airbrush system,
  • Eugene Ingerman’s healing brush,
  • a new export system that reports which parts of your image cannot be saved to your chosen file format: and that is now improved: saving now happens in the background. You can press save, and continue painting. Autosave also doesn’t interrupt your painting anymore.
  • Wolthera’s new and improved palette docker
  • A new docker for loading SVG symbol collections, which now comes with a new symbol library with brush preset icons. Perfect with the new brush editor.
  • We added Python scripting (only available in the Windows builds: we need platform maintainers). Eliakin and Wolthera have spent the summer adding great new python-based plugins, extending and improving the scripting API while working:
    • Ten brushes: a script to assign ten favorite brushes to hotkeys
    • Quick settings docker: with brush size, opacity and flow
    • Comic Projects Management tools
    • And much, much more
What has been recently added to Krita 4.0 Big performance improvements

After the development build release we sent out a user survey. In case you didn’t see the results of our last survey, this was the summary.

The biggest item on the list was lag. Lag can have many meanings, and there will always be brushes or operations that are not instant. But over the past couple of months we had the opportunity to work on an outside project to help improve the performance of Krita. While we knew this might delay the release of Krita 4.0, it would be much appreciated by artists. Some of the performance improvements include the following:

  • multi-threaded performance using all your CPUs for pixel brush engines (80% of all the brushes that are made).
  • A lot of speed optimizations with dab grouping for all brushes
  • more caching to speed up brush rendering across all brushes.

Here’s a video of Wolthera using the multithreaded brushes:

Performance Benchmarking

We also added performance benchmarking. We can see much more accurately how brushes are performing and make our brushes better/optimized in the future:

Pixel Grid

Andrey Kamakin added an option to show a thin grid around pixels if you zoom in enough:

Live Brush Preview

Scott Petrovic has been working with a number of artists to rework the brush editor. Many things have changed, including brush renaming and better saving options. There’s also a live stroke preview now to see what happens when you change settings. Parts of the editor can be shown or hidden to accommodate smaller monitors.

Isometric Grid

The grid now has a new Isometric option. This can be controlled and modified through the grid docker:

Filters
  • A new edge detection filter
  • Height to normal map filter
  • Improved gradient map filter
  • A new ASC-CDL color balance filter with slope, offset and power parameters
Layers
  • File layers now can have the location of their reference changed.
  • A convert layer to file layer option has been added that saves out layers and replaces them with a file layer referencing them.
Dockers
  • A new docker for use on touch screens: big buttons in a layout that resembles the button row of a Wacom tablet.
More

And there’s of course a lot of bug fixes, UI polish, performance improvements, small feature improvements. The list is too long to keep here, so we’re working on a separate release notes page. These notes, like this Krita 4.0 build, are very much a work in progress!

Features currently working on

There are still a number of features we want to have done before we release Krita 4.0:

  • a new text tool (we have started the ground work for this, but it still needs a lot more work)
  • a faster colorize mask tool (we need to make this much faster as it is currently too slow)
  • stacked brushes where you can have multiple brush tips similar to other applications.

And then there are no doubt things missing from the big new features, like SVG vector layers and Python scripting, that need to be implemented, and there will be bugs that need to be fixed. We’ve made packages for you to download and test, but be warned, there are bugs. And:

This is pre-alpha code. It will crash. It will do weird things. It might even destroy your images on saving!

AND: DOCUMENTS WITH VECTOR LAYERS SAVED IN KRITA 4.0 CANNOT BE EDITED IN KRITA 3.x!

You can have both Krita 3 and Krita 4 on the same system. They will use the same configuration (for now, that might change), which means that either Krita 3 or Krita 4 can get confused. They will use the same resources folder, so brush presets and so on are shared.

Downloads

Right now, all releases and builds, except for the Lime PPA, are created by the project maintainer, Boudewijn Rempt. This is not sustainable! Only for the Windows build, a third person is helping out by maintaining the scripts needed to build and package Krita. We really do need people to step up and help maintain the Linux and macOS/OSX builds. This means that:

  • The Linux AppImage is missing Python scripting and sound playback. It may be missing the QML-based touch docker. We haven’t managed to figure out how to add those features to the appimage! The appimage build script is also seriously outdated, and Boudewijn doesn’t have time to improve it, next to all the other things that need to be done and managed and, especially, coded. We need a platform maintainer for Linux!
  • The OSX/macOS DMG is missing Python scripting as well as PDF import and G’Mic integration. Boudewijn simply does not have the in-depth knowledge of OSX/macOS needed to figure out how to add that properly to the OSX/macOS build and packages. Development on OSX is picking up, thanks to Bernhard Liebl, but we need a platform maintainer for macOS/OSX!
Windows Download

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

There are no 32 bits Windows builds yet. There is no installer.

Linux Download

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

OSX Download

Note: the gmic-qt and pdf plugins are not available on OSX.

Source code md5sums

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here.

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

Categories: FLOSS Project Planets

Wim Leers: Rendering & caching: a journey through the layers

Planet Drupal - Mon, 2017-11-06 13:11

The Drupal render pipeline and its caching capabilities have been the subject of quite a few talks of mine and of multiple writings. But all of those were very technical, very precise.

Over the past year and a half I’d heard multiple times there was a need for a more pragmatic talk, where only high-level principles are explained, and it is demonstrated how to step through the various layers with a debugger. So I set out to do just that.

I figured it made sense to spend 10–15 minutes explaining (using a hand-drawn diagram that I spent a lot of time tweaking) and spend the rest of the time stepping through things live. Yes, this was frightening. Yes, there were last-minute problems (my IDE suddenly didn’t allow font size scaling …), but it seems overall people were very satisfied :)

Have you seen and heard of Render API (with its render caching, lazy builders and render pipeline), Cache API (and its cache tags & contexts), Dynamic Page Cache, Page Cache and BigPipe? Have you cursed them, wondered about them, been confused by them?

I will show you three typical use cases:

  1. An uncacheable block
  2. A personalized block
  3. A cacheable block that you can see if you have a certain permission and that should update whenever some entity is updated

… and for each, will take you on the journey through the various layers: from rendering to render caching, on to Dynamic Page Cache and eventually Page Cache … or BigPipe.

Coming out of this session, you should have a concrete understanding of how these various layers cooperate, how you as a Drupal developer can use them to your advantage, and how you can test that it’s behaving correctly.

I’m a maintainer of Dynamic Page Cache and BigPipe, and an effective co-maintainer of Render API, Cache API and Page Cache.

Preview:

Slides: Slides with transcript
Video: YouTube
Conference: Drupalcon Vienna
Location: Vienna, Austria
Date: Sep 28 2017 - 14:15
Duration: 60 minutes
Extra information:

See https://events.drupal.org/vienna2017/sessions/rendering-caching-journey-through-layers.

Attendees: 200

Evaluations: 4.6/5

Thanks for the explanation. Your sketches about the rendering process and how dynamic cache, page cache and big pipe work together are awesome. It is very clear now for me.


Best session for me on DC. Good examples, loved the live demo, these live demos are much more helpful to me as a developer than static slides. General comments, not related to the speaker. The venue was too small for this talk and should have been on a larger stage. Also the location next to the exhibition stands made it a bit noisy when sitting in the back.


Great presentation! I really liked the hand-drawn figure and live demo, they made it really easy to understand and follow. The speaking was calm but engaging. It was great that you were so flexible on the audience feedback.
Categories: FLOSS Project Planets

Stack Abuse: Python Linked Lists

Planet Python - Mon, 2017-11-06 11:07

A linked list is one of the most common data structures used in computer science. It is also one of the simplest, and it is fundamental to higher-level structures like stacks, circular buffers, and queues.

Generally speaking, a list is a collection of single data elements that are connected via references. C programmers know this as pointers. For example, a data element can consist of address data, geographical data, geometric data, routing information, or transaction details. Usually, each element of the linked list has the same data type that is specific to the list.

A single list element is called a node. The nodes are not like arrays, which are stored sequentially in memory. Instead, you are likely to find them at different memory locations, which you can reach by following the pointers from one node to the next. It is common to mark the end of the list with a NIL element, represented by the Python equivalent None.

Figure 1: Single-linked list

There exist two kinds of lists - single and double-linked lists. A node in a single-linked list only points to the next element in the list, whereas a node in a double-linked list points to the previous node, too. The data structure occupies more space because you will need an additional variable to store the further reference.

Figure 2: Double-linked list

A single-linked list can be traversed from head to tail, whereas traversing backwards is not as easy as that. In contrast, a double-linked list allows traversing the nodes in both directions at the same cost, no matter which node you start with. Also, adding and deleting nodes, as well as splitting a single-linked list, is done in no more than two steps. In a double-linked list, four pointers have to be changed.
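
To make that difference concrete, here is a small illustrative sketch (it is not part of the article's own implementation, which starts in Step 1 below): a double-linked node carries an extra prev reference, and inserting a node between two existing ones therefore touches four references.

class DoubleListNode:
    def __init__(self, data):
        self.data = data
        self.prev = None  # reference to the previous node
        self.next = None  # reference to the next node


def insert_between(left, new_node, right):
    "insert new_node between the two existing nodes left and right"
    # four references have to be updated
    left.next = new_node
    new_node.prev = left
    new_node.next = right
    right.prev = new_node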

The Python language does not contain a pre-defined datatype for linked lists. To cope with this situation we either have to create our own data type, or have to make use of additional Python modules that provide an implementation of such a data type.

In this article we'll go through the steps to create our own linked list data structure. First we create a corresponding data structure for the node. Second, you will learn how to implement and use both a single-linked list, and finally a double-linked list.

Step 1: Node as a Data Structure

To have a data structure we can work with, we define a node. A node is implemented as a class named ListNode. The class contains the definition to create an object instance, in this case, with two variables - data to keep the node value, and next to store the reference to the next node in the list. Furthermore, a node has the following methods and properties:

  • __init__(): initialize the node with the data
  • self.data: the value stored in the node
  • self.next: the reference pointer to the next node
  • has_value(): compare a value with the node value

These methods ensure that we can initialize a node properly with our data (__init__()), and cover both the data extraction and storage (via the self.data property) as well as getting the reference to the connected node (via the self.next property). The method has_value() allows us to compare the node value with the value of a different node.

Listing 1: The ListNode class

class ListNode:
    def __init__(self, data):
        "constructor to initiate this object"

        # store data
        self.data = data

        # store reference (next item)
        self.next = None
        return

    def has_value(self, value):
        "method to compare the value with the node data"

        if self.data == value:
            return True
        else:
            return False

Creating a node is as simple as that, and instantiates an object of class ListNode:

Listing 2: Instantiation of nodes

node1 = ListNode(15)
node2 = ListNode(8.2)
node3 = ListNode("Berlin")

Having done that, we have three instances of the ListNode class available. These instances represent three independent nodes that contain the values 15 (integer), 8.2 (float), and "Berlin" (string).

Step 2: Creating a Class for a Single-Linked List

As the second step we define a class named SingleLinkedList that covers the methods needed to manage our list nodes. It contains these methods:

  • __init__(): initiate an object
  • list_length(): return the number of nodes
  • output_list(): outputs the node values
  • add_list_item(): add a node at the end of the list
  • unordered_search(): search the list for the nodes with a specified value
  • remove_list_item_by_id(): remove the node according to its id

We will go through each of these methods step by step.

The __init__() method defines two internal class variables named head and tail. They represent the beginning and the end nodes of the list. Initially, both head and tail have the value None as long as the list is empty.

Listing 3: The SingleLinkedList class (part one)

class SingleLinkedList:
    def __init__(self):
        "constructor to initiate this object"

        self.head = None
        self.tail = None
        return

Step 3: Adding Nodes

Adding items to the list is done via add_list_item(). This method requires a node as an additional parameter. To make sure it is a proper node (an instance of class ListNode) the parameter is first verified using the built in Python function isinstance(). If successful, the node will be added at the end of the list. If item is not a ListNode, then one is created.

In case the list is (still) empty the new node becomes the head of the list. If a node is already in the list, then the value of tail is adjusted accordingly.

Listing 4: The SingleLinkedList class (part two)

def add_list_item(self, item):
    "add an item at the end of the list"

    if not isinstance(item, ListNode):
        item = ListNode(item)

    if self.head is None:
        self.head = item
    else:
        self.tail.next = item

    self.tail = item
    return

The list_length() method counts the nodes and returns the length of the list. To get from one node to the next, the node property self.next comes into play: it holds the link to the next node. Counting the nodes is done in a while loop that runs until we reach the end of the list, which is marked by a None link to the next node.

Listing 5: The SingleLinkedList class (part three)

def list_length(self):
    "returns the number of list items"

    count = 0
    current_node = self.head

    while current_node is not None:
        # increase counter by one
        count = count + 1

        # jump to the linked node
        current_node = current_node.next

    return count

The method output_list() prints the node values using the node property data. Again, the link provided via the next property is used to get from one node to the next.

Listing 6: The SingleLinkedList class (part four)

def output_list(self):
    "outputs the list (the value of the node, actually)"

    current_node = self.head

    while current_node is not None:
        print(current_node.data)

        # jump to the linked node
        current_node = current_node.next

    return

Based on the class SingleLinkedList we can create a proper list named track, and play with the methods described above in Listings 3-6. We create three list nodes plus one plain value (which add_list_item() wraps in a ListNode for us), add them in a for loop, and output the list content after each addition. Listing 7 shows you how to program that, and Listing 8 shows the output.

Listing 7: Creation of nodes and list output

# create four single nodes
node1 = ListNode(15)
node2 = ListNode(8.2)
item3 = "Berlin"
node4 = ListNode(15)

track = SingleLinkedList()
print("track length: %i" % track.list_length())

for current_item in [node1, node2, item3, node4]:
    track.add_list_item(current_item)
    print("track length: %i" % track.list_length())
    track.output_list()

The output is as follows, and shows how the list grows:

Listing 8: Adding nodes to the list

$ python3 simple-list.py
track length: 0
track length: 1
15
track length: 2
15
8.2
track length: 3
15
8.2
Berlin
track length: 4
15
8.2
Berlin
15

Step 4: Searching the List

Searching the entire list is done using the method unordered_search(). It requires an additional parameter for the value to be searched. The head of the list is the starting point.

While searching we count the nodes. To indicate a match we use the corresponding node number. The method unordered_search() returns a list of node numbers that represent the matches. As an example, both the first and fourth node contain the value 15. The search for 15 results in a list with two elements: [1, 4].

Listing 9: The search method unordered_search()

def unordered_search(self, value):
    "search the linked list for the node that has this value"

    # define current_node
    current_node = self.head

    # define position
    node_id = 1

    # define list of results
    results = []

    while current_node is not None:
        if current_node.has_value(value):
            results.append(node_id)

        # jump to the linked node
        current_node = current_node.next
        node_id = node_id + 1

    return results

Step 5: Removing an Item from the List

Removing a node from the list requires adjusting just one reference: the one pointing to the node to be removed must now point to its successor. That reference to the successor is held by the node being removed, and has to be copied over before the node is dropped. In the background the Python garbage collector takes care of unreferenced objects, and tidies up.

The following method is named remove_list_item_by_id(). As a parameter it takes the number of the node, matching the node numbers returned by unordered_search().

Listing 10: Removing a node by node number

def remove_list_item_by_id(self, item_id):
    "remove the list item with the item id"

    current_id = 1
    current_node = self.head
    previous_node = None

    while current_node is not None:
        if current_id == item_id:
            if previous_node is not None:
                # bypass the node to be removed
                previous_node.next = current_node.next
            else:
                # the node to be removed is the head of the list
                self.head = current_node.next

            # we don't have to look any further
            return

        # needed for the next iteration
        previous_node = current_node
        current_node = current_node.next
        current_id = current_id + 1

    return

Step 6: Creating a Double-Linked List

To create a double-linked list it feels natural to simply extend the ListNode class by adding a reference to the previous node. This affects the methods for adding and removing nodes. As shown in Listing 11, a new property named previous has been added to store the reference pointer to the previous node in the list. We'll change our methods to use this property for tracking and traversing nodes as well.

Listing 11: Extended list node class

class ListNode:
    def __init__(self, data):
        "constructor class to initiate this object"

        # store data
        self.data = data

        # store reference (next item)
        self.next = None

        # store reference (previous item)
        self.previous = None
        return

    def has_value(self, value):
        "method to compare the value with the node data"

        if self.data == value:
            return True
        else:
            return False

Now we are able to define a double-linked list as follows:

Listing 12: A DoubleLinkedList class

class DoubleLinkedList:
    def __init__(self):
        "constructor to initiate this object"
        self.head = None
        self.tail = None
        return

    def list_length(self):
        "returns the number of list items"

        count = 0
        current_node = self.head

        while current_node is not None:
            # increase counter by one
            count = count + 1

            # jump to the linked node
            current_node = current_node.next

        return count

    def output_list(self):
        "outputs the list (the value of the node, actually)"

        current_node = self.head

        while current_node is not None:
            print(current_node.data)

            # jump to the linked node
            current_node = current_node.next

        return

    def unordered_search(self, value):
        "search the linked list for the node that has this value"

        # define current_node
        current_node = self.head

        # define position
        node_id = 1

        # define list of results
        results = []

        while current_node is not None:
            if current_node.has_value(value):
                results.append(node_id)

            # jump to the linked node
            current_node = current_node.next
            node_id = node_id + 1

        return results

As described earlier, adding nodes requires a bit more action. Listing 13 shows how to implement that:

Listing 13: Adding nodes in a double-linked list

def add_list_item(self, item):
    "add an item at the end of the list"

    if isinstance(item, ListNode):
        if self.head is None:
            self.head = item
            item.previous = None
            item.next = None
            self.tail = item
        else:
            self.tail.next = item
            item.previous = self.tail
            self.tail = item

    return

Removing an item from the list incurs similar costs, which have to be taken into account. Listing 14 shows how to do that:

Listing 14: Removing an item from a double-linked list

def remove_list_item_by_id(self, item_id):
    "remove the list item with the item id"

    current_id = 1
    current_node = self.head

    while current_node is not None:
        previous_node = current_node.previous
        next_node = current_node.next

        if current_id == item_id:
            if previous_node is not None:
                # bypass the node to be removed
                previous_node.next = next_node
                if next_node is not None:
                    next_node.previous = previous_node
            else:
                # the node to be removed is the head of the list
                self.head = next_node
                if next_node is not None:
                    next_node.previous = None

            # we don't have to look any further
            return

        # needed for the next iteration
        current_node = next_node
        current_id = current_id + 1

    return

Listing 15 shows how to use the class in a Python program.

Listing 15: Building a double-linked list

# create four single nodes
node1 = ListNode(15)
node2 = ListNode(8.2)
node3 = ListNode("Berlin")
node4 = ListNode(15)

track = DoubleLinkedList()
print("track length: %i" % track.list_length())

for current_node in [node1, node2, node3, node4]:
    track.add_list_item(current_node)
    print("track length: %i" % track.list_length())
    track.output_list()

results = track.unordered_search(15)
print(results)

track.remove_list_item_by_id(4)
track.output_list()

As you can see, we can use the class exactly as before when it was just a single-linked list. The only change is the internal data structure.
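One thing the previous references make possible, and which a single-linked list cannot offer, is traversing the list backwards. The class above does not define such a method; the following is only a sketch of how it could be done with the track list from Listing 15. It walks forward to the last node first rather than starting at track.tail, because the simple remove method in Listing 14 does not update self.tail when the final node is removed:

# walk forward to the current last node
current_node = track.head
while current_node is not None and current_node.next is not None:
    current_node = current_node.next

# then walk back to the head using the previous references
while current_node is not None:
    print(current_node.data)
    current_node = current_node.previous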

Step 7: Creating Double-Linked Lists with deque

Since other engineers have faced the same issue, we can simplify things for ourselves and use one of the few existing implementations available. In Python, we can use the deque object from the collections module. According to the module documentation:

Deques are a generalization of stacks and queues (the name is pronounced "deck" and is short for "double-ended queue"). Deques support thread-safe, memory efficient appends and pops from either side of the deque with approximately the same O(1) performance in either direction.

Among others, this object provides the following methods; a short demonstration of a few of them follows the list below:

  • append(): add an item to the right side of the list (end)
  • appendleft(): add an item to the left side of the list (head)
  • clear(): remove all items from the list
  • count(): count the number of items with a certain value
  • index(): find the first occurrence of a value in the list
  • insert(): insert an item in the list
  • pop(): remove an item from the right side of a list (end)
  • popleft(): remove an item from the left side of a list (head)
  • remove(): remove an item from the list
  • reverse(): reverse the list
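
As a quick, self-contained sketch of a few of these calls (using plain values instead of ListNode objects to keep it short):

from collections import deque

queue = deque([15, 8.2, "Berlin"])
queue.append("Moscow")        # add at the right side (end)
queue.appendleft(15)          # add at the left side (head)
print(list(queue))            # [15, 15, 8.2, 'Berlin', 'Moscow']
print(queue.count(15))        # 2
queue.pop()                   # removes "Moscow" from the end
queue.popleft()               # removes the leading 15
print(list(queue))            # [15, 8.2, 'Berlin']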

Under the hood, deque is implemented as a double-linked data structure rather than a plain Python list. The first entry has the index 0. Using deque leads to a significant simplification of the ListNode class: the only thing we keep is the class variable data to store the node value. The simplified class is shown in Listing 16:

Listing 16: ListNode class with deque (simplified)

from collections import deque

class ListNode:
    def __init__(self, data):
        "constructor class to initiate this object"

        # store data
        self.data = data
        return

The definition of nodes does not change, and is similar to Listing 2. With this knowledge in mind we create a list of nodes as follows:

Listing 17: Creating a List with deque

track = deque([node1, node2, node3])

print("three items (initial list):")
for item in track:
    print(item.data)

Adding an item at the beginning of the list works with the appendleft() method, as Listing 18 shows:

Listing 18: Adding an element at the beginning of a list

# add an item at the beginning
node4 = ListNode(15)
track.appendleft(node4)

print("four items (added as the head):")
for item in track:
    print(item.data)

Similarly, append() adds a node at the end of the list as Listing 19 shows:

Listing 19: Adding an element at the end of the list

# add an item at the end
node5 = ListNode("Moscow")
print("five items (added at the end):")
track.append(node5)

for item in track:
    print(item.data)

Conclusion

Linked lists are easy to implement in a few lines of code, and they offer great flexibility in use. As an improvement you could add a node counter - a class variable that simply holds the number of nodes currently in the list. This reduces determining the list length to a single O(1) operation, and you do not have to traverse the entire list.
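A rough sketch of that idea, assuming the SingleLinkedList class from Listings 3-6 (the subclass name and counter attribute are made up for this example, and the remove method would have to decrement the counter in the same way):

class CountingSingleLinkedList(SingleLinkedList):
    "single-linked list that keeps track of its own length"

    def __init__(self):
        SingleLinkedList.__init__(self)
        self.node_count = 0

    def add_list_item(self, item):
        SingleLinkedList.add_list_item(self, item)
        self.node_count = self.node_count + 1

    def list_length(self):
        # no traversal needed any more - a single O(1) lookup
        return self.node_count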

For further reading and alternative implementations you may have a look here:

Acknowledgements

The author would like to thank Gerold Rupprecht and Mandy Neumeyer for their support, and comments while preparing this article.

Categories: FLOSS Project Planets

James Bromberger: Web Security 2017

Planet Debian - Mon, 2017-11-06 10:51

I started web development around late 1994. Some of my earliest paid web work is still online (dated June 1995). Clearly, that was a simpler time for content! I went on to be ‘Webmaster’ (yes, for those joining us in the last decade, that was a job title once) for UWA, and then for Hartley Poynton/JDV.com at a time when security became important as commerce boomed online.

At the dawn of the web era, the consideration of backwards compatibility with older web clients (browsers) was deemed to be important; content had to degrade nicely, even without any CSS being applied. As the years stretched out, the legacy became longer and longer. Until now.

In mid-2018, the Payment Card Industry (PCI) Data Security Standard (DSS) 3.2 comes into effect, requiring cardholder data environments to use (at minimum) TLS 1.2 for the encrypted transfer of data. Of course, that’s also the maximum version typically available today (TLS 1.3 is in draft 21 at the time of writing). This effort by the PCI is forcing people to adopt new browsers that can do the TLS 1.2 protocol (and the encryption ciphers that permits), typically by running modern/recent Chrome, Firefox, Safari or Edge browsers. And for the majority of people, Chrome is their choice, and the majority of those are all auto-updating on every release.

Many are pushing to be compliant with the 2018 PCI DSS 3.2 as early as possible; your logging of negotiated protocols and ciphers will show if your client base is ready as well. I’ve already worked with one government agency to demonstrate they were ready, and have already helped disable TLS 1.0 and 1.1 on their public facing web sites (and previously SSL v3). We’ve removed RC4 ciphers, 3DES ciphers, and enabled ephemeral key ciphers to provide forward secrecy.
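As a side note, a few lines of Python are enough to see what your own client negotiates against a given endpoint. This is only a minimal sketch using the standard library; example.com is a placeholder, and judging whether your visitors are ready still needs the server-side logging of negotiated protocols mentioned above.

import socket
import ssl

hostname = "example.com"  # placeholder - substitute the site to test
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())  # e.g. 'TLSv1.2'
        print(tls.cipher())   # (cipher name, protocol, secret bits)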

Web developers (writing Javascript and using various frameworks) can rejoice — the age of having to support legacy MS IE 6/7/8/9/10 is pretty much over. None of those browsers support TLS 1.2 out of the box (IE 10 can turn this on, but for some reason, it is off by default). This makes Javascript code smaller as it doesn’t have to have conditional code to work with the quirks of those older clients.

But as we find ourselves with modern clients, we can now ask those clients to be complicit in our attempts to secure the content we serve. They understand modern security constructs such as Content Security Policies and other HTTP security-related headers.

There’s two tools I am currently using to help in this battle to improve web security. One is SSLLabs.com, the work of Ivan Ristić (and now owned/sponsored by Qualys). This tool gives a good view of the encryption in flight (protocols, ciphers), chain of trust (certificate), and a new addition of checking DNS records for CAA records (which I and others piled on a feature request for AWS Route53 to support). The second tool is Scott Helm’s SecurityHeaders.io, which looks at the HTTP headers that web content uses to ask browsers to enforce security on the client side.

There’s a really important reason why these tools are good; they are maintained. As new recommendations on ciphers, protocols, signature algorithms or other actions become recommended, they’re updated on these tools. And these tools are produced by very small, but agile teams — like one person teams, without the bureaucracy (and lag) associated with large enterprise tools. But these shouldn’t be used blindly. These services make suggestions, and you should research them yourselves. For some, not all the recommendations may meet your personal risk profile. Personally, I’m uncomfortable with Public-Key-Pins, so that can wait for a while — indeed, Chrome has now signalled they will drop this.

So while PCI is hitting merchants with their DSS-compliance stick (and making it plainly obvious what they have to do), we’re getting a side-effect of having a concrete reason for drawing a line under where our backward compatibility must stretch back to, and the ability to have the web client assist in ensuring the security of the content we serve.

Categories: FLOSS Project Planets

Matt Raible: Life as an Open Source Developer, One Year Later

Planet Apache - Mon, 2017-11-06 10:35

It's been a little over a year since I wrote about life as an open source developer. I'm happy to say I still haven't written a single line of proprietary code. Of course, things have changed a lot in the last year. I thought going full-time would bring stability to my career. Instead, six months into it we joined forces with Okta.

The transition was rough at first. At Stormpath, we had full-featured SDKs and a great relationship with developers that used our service. We were able to port many of our SDKs to work with Okta, but we discovered that Okta didn't have a great relationship with developers. In fact, their developer blog hadn't been updated in over a year when we arrived.

On the upside, Okta's API supported standards like SAML, OAuth, and OpenID Connect. Open standards made it possible to use other frameworks and not have to rely on our own. I was pumped to find that Spring Security made it easy to integrate with SAML and OAuth. In fact, I was able to leverage these standards to add OIDC support to JHipster.

Okta's new developer console and open pricing are just a couple examples of improved happenings since we arrived. The Okta Spring Boot Starter and JavaScript libraries for Node.js, Angular, and React are also pretty awesome.

I'm happy to say my contributions on GitHub almost doubled in the last year!

As far as stress is concerned, that hasn't changed much. I've learned that the stress I feel from work is still causing me to have high blood pressure. When I measure it in the mornings, or at night, it's fine. When I measure it during the day, it's elevated. I believe my high blood pressure is caused by doing too much. Sure, it's great to be productive and accomplish a lot for my company, but it's killing me.

Therein lies the rub. I get to create my job. All I'm asked to do is write a blog post per week and speak at a conference (or meetup) once a month. Yet I'm doing way more than that. Since this time last year, I've delivered 33 presentations, in 13 different cities. I keep a page on this blog updated with all my presentations.

Next year, I still plan to speak a lot, but I plan on toning things down a bit. I'll be concentrating on US cities, with large Java user groups, and I'll be limiting my travel overseas.

Outside of my health concerns, I'm still loving my job. The fact that I get paid to speak at great conferences, write example applications, and discover new ways to do things is awesome. It's also pretty sweet that I was able to update the JHipster Mini-Book and upgrade 21-Points Health during work hours. The fact that I got featured on the main Okta blog was pretty cool too.

The good news is my overseas travel isn't done this year. Today, I leave for Devoxx Belgium, one of my favorite conferences. It'll be my first time in Antwerp without Trish. However, I'm speaking with friends Josh Long and Deepu Sasidharan, so it's sure to be a good time. Traveling to Devoxx Morocco should be fun too. I've never been to Casablanca before.

In December, you can catch me at SpringOne and The Rich Web Experience. Next year, I'll be speaking at Denver Microservices meetup, Utah JUG, Seattle JUG, and JazzCon. I plan to do a JUG tour in the northeast US too.

You might've noticed I don't write a lot of technical content here anymore. That's because I'm doing most of my writing on developer.okta.com/blog. I'm still writing for InfoQ as well. I really enjoyed attending the JavaOne keynotes and writing up what I saw.

I'll leave you with this, a project I'm working on actively and plan to finish before Devoxx Morocco.

Had a good hacking session today w/ @java_hipster & @Ionicframework. Creating a JHipster-enabled Ionic client works! https://t.co/5biHQDO941

— Matt Raible (@mraible) November 3, 2017

Viva la Open Source!

Categories: FLOSS Project Planets

Doug Hellmann: shutil — High-level File Operations — PyMOTW 3

Planet Python - Mon, 2017-11-06 09:00
The shutil module includes high-level file operations such as copying and archiving. Read more… This post is part of the Python Module of the Week series for Python 3. See PyMOTW.com for more articles from the series.
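For a flavour of what the module offers, here are a couple of representative calls (a minimal sketch; the file and directory names are placeholders):

import shutil

# copy a single file
shutil.copy("notes.txt", "/tmp/notes-backup.txt")

# copy an entire directory tree
shutil.copytree("project", "/tmp/project-copy")

# pack a directory into a gzip-compressed tar archive
shutil.make_archive("/tmp/project-backup", "gztar", root_dir="project")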
Categories: FLOSS Project Planets