Feeds

Promet Source: Great Websites are Created before the First Line of Code is Written

Planet Drupal - Sun, 2019-05-19 17:44
When you’re surrounded by a team of awesome developers, you might think that a statement such as, “Great Websites are Created before the First Line of Code is Written,” isn’t going to be met with a lot of enthusiasm. As it turns out, our developers tend to be among the greatest supporters of the kind of Human-Centered Design engagements that get all stakeholders on the same page and create a roadmap for transformative possibilities. 
Categories: FLOSS Project Planets

Louis-Philippe Véronneau: Am I Fomu ?

Planet Debian - Sun, 2019-05-19 17:30

A few months ago at FOSDEM 2019 I got my hands on a pre-production version of the Fomu, a tiny open-hardware FPGA board that fits in your USB port. Building on the smash hit of the Tomu, the Fomu uses an iCE40UP5K FPGA instead of an ARM core.

I've never really been into hardware hacking, and much like hacking on the Linux kernel, messing with wires and soldering PCBs always intimidated me. From my perspective, playing around with the Fomu looked like a nice way to test the water without drowning in it.

Since the bootloader wasn't written at the time, when I first got my Fomu hacker board there was no easy way to test if the board was working. Lucky for me, Giovanni Mascellani was around and flashed a test program on it using his Raspberry Pi and a bunch of hardware probes. I was really impressed by the feat, but it also seemed easy enough that I could do it.

Back at home, I ordered a Raspberry Pi, bought some IC hooks and borrowed a soldering iron from my neighbour. It had been a while since I had soldered anything! Last time I did I was 14 years old and trying to save a buck making my own fencing mask and body cords...

My goal was to test foboot, the new DFU-compatible bootloader recently written by Sean Cross (xobs) to make flashing programs on the board more convenient. Replicating Giovanni's setup, I flashed the Fomu Raspbian image on my Pi and compiled the bootloader.

It took me a good 15 minutes to connect the IC hooks to the board, but I was successfully able to flash foboot on the Fomu! The board now greets me with:

[ 9751.556784] usb 8-2.4: new full-speed USB device number 31 using xhci_hcd
[ 9751.841038] usb 8-2.4: New USB device found, idVendor=1209, idProduct=70b1, bcdDevice= 1.01
[ 9751.841043] usb 8-2.4: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 9751.841046] usb 8-2.4: Product: Fomu Bootloader (0) v1.4-2-g1913767
[ 9751.841049] usb 8-2.4: Manufacturer: Kosagi

I don't have a use case for the Fomu yet, but I am sure by the time the production version ships out, people will have written interesting programs I can flash on it. In the meantime, it'll blink slowly in my laptop's USB port.
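Once foboot is on the board, loading further programs should no longer need probes: the bootloader speaks DFU, so standard dfu-util tooling can talk to it over USB. A sketch, assuming some compiled bitstream named blink.bin (a placeholder; the VID:PID pair is the one from the dmesg output above):

```shell
# List DFU-capable devices; the Fomu bootloader enumerates as 1209:70b1
dfu-util --list

# Flash a program onto the board over DFU
dfu-util --device 1209:70b1 --download blink.bin
```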

Categories: FLOSS Project Planets

GNU Guix: GNU Guix 1.0.1 released

GNU Planet! - Sun, 2019-05-19 17:30

We are pleased to announce the release of GNU Guix version 1.0.1. This new version fixes bugs in the graphical installer for the standalone Guix System.

The release comes with ISO-9660 installation images, a virtual machine image, and with tarballs to install the package manager on top of your GNU/Linux distro, either from source or from binaries. Guix users can update by running guix pull.

It’s been just over two weeks since we announced 1.0.0—two weeks and 706 commits by 40 people already!

This is primarily a bug-fix release, specifically focusing on issues in the graphical installer for the standalone system:

  • The most embarrassing bug would lead the graphical installer to produce a configuration where %base-packages was omitted from the packages field. Consequently, the freshly installed system would not have the usual commands in $PATH—ls, ps, etc.—and Xfce would fail to start for that reason. See below for a “post-mortem” analysis.
  • The wpa-supplicant service would sometimes fail to start in the installation image, thereby breaking network access; this is now fixed.
  • The installer now allows you to toggle the visibility of passwords and passphrases, and it no longer restricts their length.
  • The installer can now create Btrfs file systems.
  • network-manager-applet is now part of %desktop-services, and thus readily usable not just from GNOME but also from Xfce.
  • There were also minor bug fixes for guix environment, guix search, and guix refresh; the NEWS file has more details.

A couple of new features were reviewed in time to make it into 1.0.1:

  • guix system docker-image now produces an OS image with an “entry point”, which makes it easier to use than before.
  • guix system container has a new --network option, allowing the container to share networking access with the host.
  • 70 new packages were added and 483 packages were updated.
  • Translations were updated as usual and we are glad to announce a 20%-complete Russian translation of the manual.
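Assuming a system configuration at /etc/config.scm (the path used elsewhere in this post), the first two features can be tried roughly as follows; this is a sketch, not output from the release notes:

```shell
# Build a Docker image of the declared system; the image now carries an
# entry point, so it can be run directly with `docker run`:
guix system docker-image /etc/config.scm

# Run the declared system as a container sharing the host's network:
sudo guix system container --network /etc/config.scm
```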
Recap of bug #35541

The 1.0.1 release was primarily motivated by bug #35541, which was reported shortly after the 1.0.0 release. If you installed Guix System with the graphical installer, chances are that, because of this bug, you ended up with a system where all the usual GNU/Linux commands—ls, grep, ps, etc.—were not in $PATH. That in turn would also prevent Xfce from starting, if you chose that desktop environment for your system.

We quickly published a note in the system installation instructions explaining how to work around the issue:

  • First, install packages that provide those commands, along with the text editor of your choice (for example, emacs or vim):

    guix install coreutils findutils grep procps sed emacs vim
  • At this point, the essential commands you would expect are available. Open your configuration file with your editor of choice, for example emacs, running as root:

    sudo emacs /etc/config.scm
  • Change the packages field to add the “base packages” to the list of globally-installed packages, such that your configuration looks like this:

    (operating-system
      ;; … snip …
      (packages (append (list (specification->package "nss-certs"))
                        %base-packages))
      ;; … snip …
      )
  • Reconfigure the system so that your new configuration is in effect:

    guix pull && sudo guix system reconfigure /etc/config.scm

If you already installed 1.0.0, you can perform the steps above to get all these core commands back.

Guix is purely declarative: if you give it an operating system definition where the “base packages” are not available system-wide, then it goes ahead and installs precisely that. That’s exactly what happened with this bug: the installer generated such a configuration and passed it to guix system init as part of the installation process.
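That installation step boils down to the same command a manual installation uses; schematically (the /mnt mount point is the conventional one from the manual, not something specific to this bug):

```shell
# Install the declared operating system onto the file systems
# mounted under /mnt, bootloader included:
guix system init /mnt/etc/config.scm /mnt
```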

Lessons learned

Technically, this is a “trivial” bug: it’s fixed by adding one line to your operating system configuration and reconfiguring, and the fix for the installer itself is also a one-liner. Nevertheless, it’s obviously a serious bug for the impression it gives—this is not the user experience we want to offer. So how did such a serious bug go through unnoticed?

For several years now, Guix has had a number of automated system tests running in virtual machines (VMs). These tests primarily ensure that system services work as expected, but some of them specifically test system installation: installing to a RAID or encrypted device, with a separate /home, using Btrfs, etc. These tests even run on our continuous integration service (search for the “tests.*” jobs there).

Unfortunately, those installation tests target the so-called “manual” installation process, which is scriptable. They do not test the installer’s graphical user interface. Consequently, testing the user interface (UI) itself was a manual process. Our attention was, presumably, focused more on UI aspects since—so we thought—the actual installation tests were already taken care of by the system tests. That the generated system configuration could be syntactically correct but still wrong from a usability viewpoint perhaps didn’t occur to us. The end result is that the issue went unnoticed.

The lessons here are that manual testing should also look for issues in “unexpected places”, and, more importantly, that we need automated tests for the graphical UI. The Debian and Guix installer UIs are similar—both use the Newt toolkit. Debian tests its installer using “pre-seeds” (code), which are essentially answers to all the questions and choices the UI would present. We could adopt a similar approach, or we could test the UI itself at a lower level—reading the screen, and simulating keystrokes. UI testing is notoriously tricky so we’ll have to figure out how to get there.
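The lower-level approach could, for instance, drive the installer in a VM through the QEMU monitor. A purely hypothetical sketch (the image name, port, and timing are all made up, and this is not an existing Guix test):

```shell
# Boot the installation image headless, exposing the QEMU monitor on TCP:
qemu-system-x86_64 -m 1024 \
  -drive file=guix-system-install.iso,media=cdrom \
  -display none \
  -monitor tcp:127.0.0.1:4444,server,nowait &

# Crude wait for the installer UI to come up:
sleep 60

# Simulate a keystroke in the installer UI (here: pressing Enter);
# screen contents could be inspected via 'screendump' the same way:
printf 'sendkey ret\n' | nc 127.0.0.1 4444
```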

Conclusion

Our 1.0 party was a bit spoiled by this bug, and we are sorry that installation was disappointing to those of you who tried 1.0. We hope 1.0.1 will allow you to try and see what declarative and programmable system configuration management is like, because that’s where the real value of Guix System is—the graphical installer is icing on the cake.

Join us on #guix and on the mailing lists!

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

Categories: FLOSS Project Planets

Spinning Code: SC DUG May 2019

Planet Drupal - Sun, 2019-05-19 15:45

For this month’s SC DUG, Mauricio Orozco from the South Carolina Commission for Minority Affairs shared his notes and lessons learned during his first DrupalCon North America.

We frequently use these meetings to practice new presentations, try out heavily revised versions, and test new ideas with a friendly audience. If you want to see a polished version, check out our group members’ talks at camps and cons. So if some of the content of these videos seems a bit rough, please understand that we are all learning all the time, and we are open to constructive feedback.

If you would like to join us, please check out our upcoming events on Meetup for meeting times, locations, and connection information.

Categories: FLOSS Project Planets

KDE Craft now delivers with vlc and libvlc on macOS

Planet KDE - Sun, 2019-05-19 14:56

With VLC and libvlc missing from Craft, phonon-vlc could not be built successfully on macOS, which in turn caused KDE Connect builds in Craft to fail.

As a small step of my GSoC project, I managed to build KDE Connect by removing the phonon-vlc dependency. But that’s not a good solution; I should fix the phonon-vlc build on macOS instead. So during the community bonding period, to get to know the community and some of its important tools better, I set out to fix phonon-vlc.

Fixing phonon-vlc

At first, I installed libVLC from MacPorts. All the header files and libraries get installed into the system paths, so in theory building phonon-vlc should not have been a problem. But an error occurred:

The compilation itself is fine; the error comes at the very end, during linking. The error message tells us the QtDBus library is missing. So to fix it, I made a small patch adding QtDBus manually to the CMakeLists file:

diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt
index 47427b2..1cdb250 100644
--- a/src/CMakeLists.txt
+++ b/src/CMakeLists.txt
@@ -81,7 +81,7 @@ if(APPLE)
endif(APPLE)

automoc4_add_library(phonon_vlc MODULE ${phonon_vlc_SRCS})
-qt5_use_modules(phonon_vlc Core Widgets)
+qt5_use_modules(phonon_vlc Core Widgets DBus)

set_target_properties(phonon_vlc PROPERTIES
PREFIX ""

And it works well!

One small puzzle: Hannah said she didn’t get an error during linking. It may be related to the Qt version; if you have an idea, feel free to contact me.

My Qt version is 5.12.3.

Fixing VLC

To fix VLC, I tried to package the VLC binary just as is done on Windows.

But unfortunately, the header files in the .app package are not complete, and compared to the Windows version, the entire plugins folder is missing.

So I made a patch adding all those files. But the patch is huge (25000 lines!), so merging it into the master branch is not a good idea.

Thankfully, Hannah has made a libs/vlc blueprint in the master branch, so feel free to install it in Craft by running craft libs/vlc.

Troubleshooting

If, like me, you cannot build libs/vlc, you can instead use the binary version of VLC together with the header-files patch.

The header patch for the binary is too big to add to the master branch, so I published it in my own repository:
https://github.com/Inokinoki/craft-blueprints-inoki

To use it, run craft --add-blueprint-repository https://github.com/inokinoki/craft-blueprints-inoki.git and the blueprint(s) will be added to your local blueprint directory.

Then, craft binary/vlc will fetch the vlc binary and install its header files and libraries into Craft’s include and lib paths. Finally, you can build whatever you want that depends on libvlc.

Conclusion

For now, KDE Connect uses QtMultimedia rather than phonon and phonon-vlc to play sounds. But this work could also be useful for other applications or libraries that depend on phonon, phonon-vlc or vlc; this small step may help them build successfully on macOS.

I hope this can help someone!

Categories: FLOSS Project Planets

Joey Hess: 80 percent

Planet Debian - Sun, 2019-05-19 12:43

I added dh to debhelper a decade ago, and now Debian is considering making use of dh mandatory. Not being part of Debian anymore, I'm in the position of needing to point out something important about it anyway. So this post is less about pointing in a specific direction than about giving a different angle to think about things.

debhelper was intentionally designed as a 100% solution for simplifying building Debian packages. Any package it's used with gets simplified and streamlined and made less of a bother to maintain. The way debhelper succeeds at 100% is not by doing everything, but by being usable in little pieces that build up to a larger, more consistent whole, but that can just as well be used sparingly.

dh was intentionally not designed to be a 100% solution, because it is not a collection of little pieces, but a framework. I first built an 80% solution, which is the canned sequences of commands it runs plus things like dh_auto_build that guess at how to build any software. Then I iterated to get closer to 100%. The main iteration was override targets in the debian/rules file, to let commands be skipped or run out of order or with options. That closed dh's gap by a further 80%.

So, dh is probably somewhere around a 96% solution now. It may have crept closer still to 100%, but it seems likely there is still a gap, because it was never intended to completely close the gap.

Starting at 100% and incrementally approaching 100% are very different design choices. The end results can look very similar, since in both cases it can appear that nearly everyone has settled on doing things in the same way. I feel though, that the underlying difference is important.

PS: It's perhaps worth re-reading the original debhelper email and see how much my original problems with debstd would also apply to dh if its use were mandatory!

Categories: FLOSS Project Planets

Codementor: Python's Counter - Part 1

Planet Python - Sun, 2019-05-19 09:41
This article provides an introduction to Python's built-in 'Counter' tool, and is part of a series of articles on some often ignored, or undiscovered, built-in Python libraries.
Categories: FLOSS Project Planets

Okular: another improvement to annotation

Planet KDE - Sun, 2019-05-19 09:40

Continuing from the addition of the line-ending style for the Straight Line annotation tool, I have added the ability to select the line start style as well. The required code changes were committed today.

Line annotation with circled start and closed arrow ending.

Currently it is supported only for PDF documents (and poppler version ≥ 0.72), but that will change soon — thanks to another change by Tobias Deiminger under review to extend the functionality for other documents supported by Okular.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RQuantLib 0.4.9: Another small update

Planet Debian - Sun, 2019-05-19 09:28

A new version 0.4.9 of RQuantLib reached CRAN and Debian. It completes the change of some RQuantLib internals to follow an upstream change in QuantLib: we can now seamlessly switch between shared_ptr<> from Boost and from C++11 – Luigi wrote about the how and why in an excellent blog post that is part of a larger (and also excellent) series of posts on QuantLib internals.

QuantLib is a very comprehensive free/open-source library for quantitative finance, and RQuantLib connects it to the R environment and language.

The complete set of changes is listed below:

Changes in RQuantLib version 0.4.9 (2019-05-15)
  • Changes in RQuantLib code:

    • Completed switch to QuantLib::ext namespace wrappers for either shared_ptr use started in 0.4.8.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Andrew Cater: systemd.unit=rescue.target

Planet Debian - Sun, 2019-05-19 07:17
Just another quick one-liner: a Grub config argument which I had to dig for, but which is really useful when this sort of thing happens.

Faced with a server that was rebooting after an upgrade and dropping to the systemd emergency target:

Rebooting and adding

systemd.unit=rescue.target

to the end of the Linux command line in the Grub config as the machine booted, then pressing F10, allowed me to drop to a full-featured rescue environment with read/write access to the disk and sort out the partial-upgrade mess.
Categories: FLOSS Project Planets

David Kalnischkies: Newbie contributor: A decade later

Planet Debian - Sun, 2019-05-19 06:34

Time flies. On this day, 10 years ago, a certain someone sent in his first contribution to Debian in Debbugs#433007: --dry-run can mark a package manually installed (in real life). What follows is me babbling randomly about what led to and happened after that first patch.

That wasn't my first contribution to open source: I implemented (more like copy-pasted) mercurial support in the VCS plugin of the editor I was using back in 2008: Geany – I am pretty sure my code is completely replaced by now; I just remain named in THANKS, which is very nice considering I am not a user anymore. My contributions to apt were coded in vim(-nox) already.

It was the first time I put my patch under public scrutiny though – my contribution to geanyvc went by private mail to the plugin maintainer – and not before just anyone, but before the venerable masters operating on a mailing list called deity@…

I had started looking into apt code earlier and had even written some patches for myself without actually believing that I would go as far as handing them in. Some got in anyhow later, like the first commit with my name, dated May the 7th, allowing codenames to be used in pinning – which dates the manpage changes as being written on the 4th. When I really started with apt is lost to history by now, but today (a decade ago) I got serious: I joined IRC, the mailing list and commented on the bugreport mentioned above. I even pushed my branch of random things I had done to apt to launchpad (which back then was hosting the bzr repository).

The response was overwhelming. The bugreport has no indication of it, but Michael jumped at me. I realized only later that he was the only remaining active team member in the C++ parts. Julian was mostly busy with Python at the time and Christian turned out to be Mr. L18n with duties all around Debian. The old guard had left as well as the old-old guard before them.

I got quickly entangled in everything. Michael made sure I got invited by Canonical to UDS-L in November of 2009 – 6 months after saying hi. I still can't really believe that 21y old me made his first-ever flight across the ocean to Dallas, Texas (USA) because some people on the internet invited him over. So there I was, standing in front of the airport with the slow realisation that while I had been busy being scared about the flight, the week and everything, I had never really worried about how to get from the airport to the hotel. An inner monologue started: "You got this, you just need the name of the hotel and look for a taxi. You wrote the name down right? No? Okay, you can remember the name anyhow, right? Just say it and … why are you so silent? Say it! … Goddammit, you are …" – "David?" was interrupting my inner voice. Of all people in the world, I happened to meet Michael for the first time right in front of the airport. "Just as planned you meany inner voice", I was kidding myself after getting in a taxi with a few more people.

I met so many people over the following days! It was kinda scary, very taxing for an introvert, but also 100% fun. I also met the project that would turn me from promising newbie contributor into APT developer via Google Summer of Code 2010: MultiArch. There was a session about it and this time around it should really happen. I was sitting in the back, hiding but listening closely. Thankfully nobody had called me out as I was scared: I can't remember who it was, but someone said that in dpkg MultiArch could be added in two weeks. Nobody had to say it, for me it was clear that this meant APT would be the blocker as that most definitely would not happen in two weeks. Not even months. More like years if at all. What was I to do? Cut my losses and run? Na, sunk cost fallacy be damned. I hadn't lost anything, I had learned and enjoyed plenty of things granted to me by supercow and that seemed like a good opportunity to give back.

But there was so much to do. The cache had to grow dynamically (remember "mmap ran out of room" and feel old), commandline interfaces needed to be adapted, the resolver… oh my god, the resolver! And to top it all off, APT had no tests to speak of. So after the UDS I started tackling them all: My weekly reports for GSoC2010 provide a glimpse into the abyss, but lots happened before and after as well. Many of the decisions I made back then are still powering APT. The shell-scripting framework I wrote to perform some automatic testing of apt (I got quickly tired of manual testing) consists as of today of 255 scripts, run not only by me but by many CI services including autopkgtest. It probably prevented me from introducing thousands of regressions over the years. Even though it grew into kind of a monster (2000+ lines of posix shellscript providing the test framework alone), can be a bit slow (it can take more than the default 30min on salsa; for me locally it is about 3 minutes) and it has a strange function naming convention (all lowercase, no separator: e.g. insertinstalledpackage). Nobody said you can't make mistakes.

And I made them all: First bug caused by me. First regression with complains hitting d-devel. First security bug. It was always scary. It still is, especially as simple probability kicks in and the numbers increase combined with seemingly more hate generated on the internet: The last security bug had people identify me as purposefully malicious. All my contributions should be removed – reading that made me smile.

Lots and lots of things happened since my first patch. git tells me that 174+ people contributed to APT over the years. The top 5 contributors of all time (as of today):

  • 2904 commits by Michael Vogt (active mostly as wizard)
  • 2647 commits by David Kalnischkies (active)
  • 1304 commits by Arch Librarian (all retired, see note)
  • 1008 commits by Julian Andres Klode (active)
  • 641 commits by Christian Perrier (retired)

Note that "Arch Librarian" isn't a person, but a conversion artefact: Development started in 1998 in CVS which was later converted to arch (which eventually turned into bzr) and this CVS→arch conversion preserved the names of the initial team as CVS call signs in the commit messages only. Many of them belong hence to Jason Gunthorpe (jgg). Christians commits meanwhile are often times imports of po files for others, but there is still lots of work involved with this so that spot is well earned even if nowadays with git we have the possibility of attributing the translator not only in the changelog but also as author in the commit.

There is a huge gap after the top 5 with runner up Matt Zimmerman with 116 counted commits (but some Arch Librarian commits are his, too). And that gap for me to claim the throne isn't that small either, but I am working on it… 😉︎ I have also put enough distance between me and Julian that it will still take a while for him to catch up even if he is trying hard at the moment.

The next decade will be interesting: Various changes are queuing up in the master branch for a major break in ABI and API and a bunch of new stuff is still in the pipeline or on the drawing board. Some of these things I patched in all these years ago never made it into apt so far: I intend to change that this decade – you are supposed to have read this in "to the moon" style and erupt in a mighty cheer now so that you can't hear the following – time permitting, as so far this is all talk on my part.

The last year(s) had me not contribute as much as I would have liked due to – pardon my french – crazy shit I will hopefully be able to leave behind this (or at least next) year. I hadn't thought it would show that drastically in the stats, but looking back it is kinda obvious:

  • In year 2009 David made 167 commits
  • In year 2010 David made 395 commits
  • In year 2011 David made 378 commits
  • In year 2012 David made 274 commits
  • In year 2013 David made 161 commits
  • In year 2014 David made 352 commits
  • In year 2015 David made 333 commits
  • In year 2016 David made 381 commits
  • In year 2017 David made 110 commits
  • In year 2018 David made 78 commits
  • In year 2019 David made 18 commits so far

Let's make that number great again this year: I finally applied and got approved as a DD in 2016 (I didn't want to apply earlier), and decreasing contributions since then (completely unrelated, but still) aren't a proper response! 😉︎

Also: I enjoyed the many UDSes, the DebConfs and the other events I got to participate in over the last decade, and I hope there are many more yet to come!

tl;dr: Looking back at the last decade made me realize that a) I seem to have a high luck stat, b) too few people contribute to apt given that I remain the newest team member and c) I love working on apt for all the things which happened due to it. If only I could do that full-time like I did as part of summer of code…

P.S.: The series APT for … will return next week with a post I had promised months ago.

Categories: FLOSS Project Planets

libqaccessibilityclient 0.4.1

Planet KDE - Sun, 2019-05-19 06:31
libqaccessibilityclient 0.4.1 is out now.
Download: https://download.kde.org/stable/libqaccessibilityclient/
Changes report: http://embra.edinburghlinux.co.uk/~jr/tmp/pkgdiff_reports/libqaccessibilityclient/0.4.0_to_0.4.1/changes_report.html
Signed by Jonathan Riddell: https://sks-keyservers.net/pks/lookup?op=vindex&search=0xEC94D18F7F05997E
  • version 0.4.1
  • Use only undeprecated KDEInstallDirs variables
  • KDECMakeSettings already cares for CMAKE_AUTOMOC & BUILD_TESTING
  • Fix use in cross compilation
  • Q_ENUMS -> Q_ENUM
  • more complete release instructions
Categories: FLOSS Project Planets

bison @ Savannah: Bison 3.4 released [stable]

GNU Planet! - Sun, 2019-05-19 06:01

We are happy to announce the release of Bison 3.4.

A particular focus was put on improving the diagnostics, which are now
colored by default, and accurate with multibyte input. Their format was
also changed, and is now similar to GCC 9's diagnostics.

Users of the default backend (yacc.c) can use the new %define variable
api.header.include to avoid duplicating the content of the generated header
in the generated parser. There are two new examples installed, including a
reentrant calculator which supports recursive calls to the parser and
Flex-generated scanner.

See below for more details.

==================================================================

Bison is a general-purpose parser generator that converts an annotated
context-free grammar into a deterministic LR or generalized LR (GLR) parser
employing LALR(1) parser tables. Bison can also generate IELR(1) or
canonical LR(1) parser tables. Once you are proficient with Bison, you can
use it to develop a wide range of language parsers, from those used in
simple desk calculators to complex programming languages.

Bison is upward compatible with Yacc: all properly-written Yacc grammars
work with Bison with no change. Anyone familiar with Yacc should be able to
use Bison with little trouble. You need to be fluent in C, C++ or Java
programming in order to use Bison.

Here is the GNU Bison home page:
https://gnu.org/software/bison/

==================================================================

Here are the compressed sources:
https://ftp.gnu.org/gnu/bison/bison-3.4.tar.gz (4.1MB)
https://ftp.gnu.org/gnu/bison/bison-3.4.tar.xz (3.1MB)

Here are the GPG detached signatures[*]:
https://ftp.gnu.org/gnu/bison/bison-3.4.tar.gz.sig
https://ftp.gnu.org/gnu/bison/bison-3.4.tar.xz.sig

Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:

gpg --verify bison-3.4.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

gpg --keyserver keys.gnupg.net --recv-keys 0DDCAA3278D5264E

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
Autoconf 2.69
Automake 1.16.1
Flex 2.6.4
Gettext 0.19.8.1
Gnulib v0.1-2563-gd654989d8

==================================================================

NEWS

* Noteworthy changes in release 3.4 (2019-05-19) [stable]

** Deprecated features

  The %pure-parser directive is deprecated in favor of '%define api.pure'
  since Bison 2.3b (2008-05-27), but no warning was issued; there is one
  now.  Note that since Bison 2.7 you are strongly encouraged to use
  '%define api.pure full' instead of '%define api.pure'.

** New features

*** Colored diagnostics

  As an experimental feature, diagnostics are now colored, controlled by
  the new options --color and --style.

  To use them, install the libtextstyle library before configuring Bison.
  It is available from https://alpha.gnu.org/gnu/gettext/ for instance
  https://alpha.gnu.org/gnu/gettext/libtextstyle-0.8.tar.gz

  The option --color supports the following arguments:
  - always, yes: Enable colors.
  - never, no: Disable colors.
  - auto, tty (default): Enable colors if the output device is a tty.

  To customize the styles, create a CSS file similar to

    /* bison-bw.css */
    .warning { }
    .error { font-weight: 800; text-decoration: underline; }
    .note { }

  then invoke bison with --style=bison-bw.css, or set the BISON_STYLE
  environment variable to "bison-bw.css".

*** Disabling output

  When given -fsyntax-only, the diagnostics are reported, but no output
  is generated.

  The name of this option is somewhat misleading as bison does more than
  just checking the syntax: every stage is run (including checking for
  conflicts for instance), except the generation of the output files.

*** Include the generated header (yacc.c)

  Before, when --defines is used, bison generated a header, and pasted an
  exact copy of it into the generated parser implementation file.  If the
  header name is not "y.tab.h", it is now #included instead of being
  duplicated.

  To use an '#include' even if the header name is "y.tab.h" (which is what
  happens with --yacc, or when using the Autotools' ylwrap), define
  api.header.include to the exact argument to pass to #include.  For
  instance:

    %define api.header.include {"parse.h"}

  or

    %define api.header.include {<parser/parse.h>}

*** api.location.type is now supported in C (yacc.c, glr.c)

  The %define variable api.location.type defines the name of the type to
  use for locations.  When defined, Bison no longer defines YYLTYPE.

  This can be used in programs with several parsers to factor their
  definition of locations: let one of them generate them, and the others
  just use them.

** Changes

*** Graphviz output

  In conformance with the recommendations of the Graphviz team, if
  %require "3.4" (or better) is specified, the option --graph generates a
  *.gv file by default, instead of *.dot.

*** Diagnostics overhaul

  Column numbers were wrong with multibyte characters, which would also
  result in skewed diagnostics with carets.  Besides, because we were
  indenting the quoted source with a single space, lines with tab
  characters were incorrectly underlined.

  To address these issues, and to be clearer, Bison now issues diagnostics
  as GCC9 does.  For instance it used to display (there's a tab before the
  opening brace):

    foo.y:3.37-38: error: $2 of ‘expr’ has no declared type
     expr: expr '+' "number"    { $$ = $1 + $2; }
                                            ^~

  It now reports:

    foo.y:3.37-38: error: $2 of ‘expr’ has no declared type
        3 | expr: expr '+' "number"    { $$ = $1 + $2; }
          |                                        ^~

  Other constructs now also have better locations, resulting in more
  precise diagnostics.

*** Fix-it hints for %empty

  Running Bison with -Wempty-rules and --update will remove incorrect
  %empty annotations, and add the missing ones.

*** Generated reports

  The format of the reports (parse.output) was improved for readability.

*** Better support for --no-line

  When --no-line is used, the generated files are now cleaner: no lines
  are generated instead of empty lines.  Together with using
  api.header.include, that should help people saving the generated files
  into version control systems get smaller diffs.

** Documentation

  A new example in C shows a simple infix calculator with a hand-written
  scanner (examples/c/calc).  A new example in C shows a reentrant parser
  (capable of recursive calls) built with Flex and Bison
  (examples/c/reccalc).  There is a new section about the history of
  Yaccs and Bison.

** Bug fixes

  A few obscure bugs were fixed, including the second oldest (known) bug
  in Bison: it was there when Bison was entered in the RCS version control
  system, in December 1987.  See the NEWS of Bison 3.3 for the previous
  oldest bug.
Categories: FLOSS Project Planets

Srijan Technologies: Site Owner’s Guide to a Smooth Drupal 9 Upgrade Experience

Planet Drupal - Sun, 2019-05-19 02:39

While upgrading to the latest version is always a best practice, the process can be daunting.

Drupal 8.7 is already here and 9 will be released in a year, in June 2020.

Although a lot of discussion is happening around the upgrade and possibilities it brings along, the final product can only be as good as the process itself.

The good and important news is that moving from Drupal 8 to Drupal 9 should be really easy — radically easier than migrating from Drupal 7 to Drupal 8.

As a site owner, here’s what you need to know about the new release and what to take care of to make the process easier without many glitches.

Categories: FLOSS Project Planets

Polymorphism and Implicit Sharing

Planet KDE - Sun, 2019-05-19 02:29

Recently I have been researching into possibilities to make members of KoShape copy-on-write. At first glance, it seems enough to declare d-pointers as some subclass of QSharedDataPointer (see Qt’s implicit sharing) and then replace pointers with instances. However, there remain a number of problems to be solved, one of them being polymorphism.

polymorphism and value semantics

In the definition of KoShapePrivate class, the member fill is stored as a QSharedPointer:

QSharedPointer<KoShapeBackground> fill;

There are a number of subclasses of KoShapeBackground, including KoColorBackground and KoGradientBackground, to name just a few. We cannot store an instance of KoShapeBackground directly since we want polymorphism. But, well, making KoShapeBackground copy-on-write seems to have nothing to do with whether we store it as a pointer or an instance. So let's just put this question aside – I will come back to it at the end of this post.

d-pointers and QSharedData

The KoShapeBackground hierarchy (similar to the KoShape one) uses derived d-pointers for storing private data. To make things easier, I will here use a small example to elaborate on its use.

derived d-pointer
class AbstractPrivate
{
public:
    AbstractPrivate() : var(0) {}
    virtual ~AbstractPrivate() = default;

    int var;
};

class Abstract
{
public:
    // it is not yet copy-constructable; we will come back to this later
    // Abstract(const Abstract &other) = default;
    ~Abstract() = default;
protected:
    explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
public:
    virtual void foo() const = 0;
    virtual void modifyVar() = 0;
protected:
    QScopedPointer<AbstractPrivate> d_ptr;
private:
    Q_DECLARE_PRIVATE(Abstract)
};

class DerivedPrivate : public AbstractPrivate
{
public:
    DerivedPrivate() : AbstractPrivate(), bar(0) {}
    virtual ~DerivedPrivate() = default;

    int bar;
};

class Derived : public Abstract
{
public:
    Derived() : Abstract(*(new DerivedPrivate)) {}
    // it is not yet copy-constructable
    // Derived(const Derived &other) = default;
    ~Derived() = default;
protected:
    explicit Derived(AbstractPrivate &dd) : Abstract(dd) {}
public:
    void foo() const override { Q_D(const Derived); cout << "foo " << d->var << " " << d->bar << endl; }
    void modifyVar() override { Q_D(Derived); d->var++; d->bar++; }
private:
    Q_DECLARE_PRIVATE(Derived)
};

The main goal of making DerivedPrivate a subclass of AbstractPrivate is to avoid multiple d-pointers in the structure. Note that there are constructors taking a reference to the private data object. These make it possible for a Derived object to use the same d-pointer as its Abstract parent. The Q_D() macro is used to convert the d_ptr, which is a pointer to AbstractPrivate, to another pointer, named d, of some descendant type; here, it is a DerivedPrivate. It is used together with the Q_DECLARE_PRIVATE() macro in the class definition and has a rather complicated implementation in the Qt headers. But for simplicity, it does not hurt for now to understand it as the following:

#define Q_D(Class) Class##Private *const d = reinterpret_cast<Class##Private *>(d_ptr.data())

where Class##Private means simply to append string Private to (the macro argument) Class.

Now let’s test it by creating a pointer to Abstract and give it a Derived object:

int main()
{
    QScopedPointer<Abstract> ins(new Derived());
    ins->foo();
    ins->modifyVar();
    ins->foo();
}

Output:

foo 0 0
foo 1 1

Looks pretty viable – everything’s working well! – What if we use Qt’s implicit sharing? Just make AbstractPrivate a subclass of QSharedData and replace QScopedPointer with QSharedDataPointer.
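Before doing that, it helps to see what implicit sharing actually buys us. Below is a minimal plain-C++ sketch of the copy-on-write idea behind QSharedDataPointer; the names CowPointer and useCount are made up for illustration and are not Qt API:

```cpp
#include <cassert>
#include <memory>

// A minimal copy-on-write pointer: const access shares, non-const access
// detaches. This only sketches the idea behind QSharedDataPointer.
template <typename T>
class CowPointer
{
public:
    explicit CowPointer(T *p) : m_d(p) {}

    // Read access never copies: all owners keep sharing the same object.
    const T *constData() const { return m_d.get(); }
    const T *operator->() const { return m_d.get(); }

    // Write access detaches first if the object is shared.
    T *data() { detach(); return m_d.get(); }
    T *operator->() { detach(); return m_d.get(); }

    long useCount() const { return m_d.use_count(); }

private:
    void detach()
    {
        if (m_d.use_count() > 1)
            m_d = std::make_shared<T>(*m_d); // deep copy on first write
    }
    std::shared_ptr<T> m_d;
};
```

A copy of a CowPointer shares the payload until one side performs a non-const access; note that detach() copies via T's copy constructor, which is exactly the detail that will matter once the payload is polymorphic.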

making d-pointer QSharedDataPointer

In the last section, we commented out the copy constructors since QScopedPointer is not copy-constructable, but here QSharedDataPointer is copy-constructable, so we add them back:

class AbstractPrivate : public QSharedData
{
public:
    AbstractPrivate() : var(0) {}
    virtual ~AbstractPrivate() = default;

    int var;
};

class Abstract
{
public:
    Abstract(const Abstract &other) = default;
    ~Abstract() = default;
protected:
    explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
public:
    virtual void foo() const = 0;
    virtual void modifyVar() = 0;
protected:
    QSharedDataPointer<AbstractPrivate> d_ptr;
private:
    Q_DECLARE_PRIVATE(Abstract)
};

class DerivedPrivate : public AbstractPrivate
{
public:
    DerivedPrivate() : AbstractPrivate(), bar(0) {}
    virtual ~DerivedPrivate() = default;

    int bar;
};

class Derived : public Abstract
{
public:
    Derived() : Abstract(*(new DerivedPrivate)) {}
    Derived(const Derived &other) = default;
    ~Derived() = default;
protected:
    explicit Derived(AbstractPrivate &dd) : Abstract(dd) {}
public:
    void foo() const override { Q_D(const Derived); cout << "foo " << d->var << " " << d->bar << endl; }
    void modifyVar() override { Q_D(Derived); d->var++; d->bar++; }
private:
    Q_DECLARE_PRIVATE(Derived)
};

And testing the copy-on-write mechanism:

int main()
{
    QScopedPointer<Derived> ins(new Derived());
    QScopedPointer<Derived> ins2(new Derived(*ins));
    ins->foo();
    ins->modifyVar();
    ins->foo();
    ins2->foo();
}

But, eh, it’s a compile-time error.

error: reinterpret_cast from type 'const AbstractPrivate*' to type 'AbstractPrivate*' casts away qualifiers
    Q_DECLARE_PRIVATE(Abstract)

Q_D, revisited

So, where does the const removal come from? In qglobal.h, the code related to Q_D is as follows:

template <typename T> inline T *qGetPtrHelper(T *ptr) { return ptr; }
template <typename Ptr> inline auto qGetPtrHelper(const Ptr &ptr) -> decltype(ptr.operator->()) { return ptr.operator->(); }

// The body must be a statement:
#define Q_CAST_IGNORE_ALIGN(body) QT_WARNING_PUSH QT_WARNING_DISABLE_GCC("-Wcast-align") body QT_WARNING_POP
#define Q_DECLARE_PRIVATE(Class) \
    inline Class##Private* d_func() \
    { Q_CAST_IGNORE_ALIGN(return reinterpret_cast<Class##Private *>(qGetPtrHelper(d_ptr));) } \
    inline const Class##Private* d_func() const \
    { Q_CAST_IGNORE_ALIGN(return reinterpret_cast<const Class##Private *>(qGetPtrHelper(d_ptr));) } \
    friend class Class##Private;

#define Q_D(Class) Class##Private * const d = d_func()

It turns out that Q_D will call d_func() which then calls an overload of qGetPtrHelper() that takes const Ptr &ptr. What does ptr.operator->() return? What is the difference between QScopedPointer and QSharedDataPointer here?

QScopedPointer's operator->() is a const method that returns a non-const pointer to T; however, QSharedDataPointer has two operator->()s, one being const T* operator->() const, the other T* operator->(), and they have quite different behaviours – the non-const variant calls detach() (where copy-on-write is implemented), but the other one does not.

qGetPtrHelper() here can only take d_ptr as a const QSharedDataPointer, not a non-const one; so, no matter which d_func() we are calling, we can only get a const AbstractPrivate *. That is just the problem here.
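This overload-resolution trap can be reproduced without Qt at all. In the sketch below, Ptr and getPtrHelper are stand-ins for QSharedDataPointer and qGetPtrHelper (not the real Qt code): because the helper takes its argument by const reference, only the const operator->() is ever viable, so the deduced return type is always const Data *:

```cpp
#include <cassert>
#include <type_traits>

struct Data { int var = 0; };

// Stand-in for QSharedDataPointer: two operator->() overloads with
// different return types, just like the real class.
struct Ptr
{
    Data *raw;
    const Data *operator->() const { return raw; } // chosen on const access
    Data *operator->() { return raw; }             // would detach() in Qt
};

// Stand-in for qGetPtrHelper(const Ptr &ptr): since the parameter is a
// const reference, overload resolution inside can only pick the const
// operator->(), so the deduced return type is const Data * -- always.
template <typename P>
auto getPtrHelper(const P &ptr) -> decltype(ptr.operator->())
{
    return ptr.operator->();
}

// reinterpret_cast<Data *>(getPtrHelper(p)) would now "cast away
// qualifiers", which is exactly the compile error Q_D runs into.
```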

To resolve this problem, let’s replace the Q_D macros with the ones we define ourselves:

#define CONST_SHARED_D(Class) const Class##Private *const d = reinterpret_cast<const Class##Private *>(d_ptr.constData())
#define SHARED_D(Class) Class##Private *const d = reinterpret_cast<Class##Private *>(d_ptr.data())

We will then use SHARED_D(Class) in place of Q_D(Class) and CONST_SHARED_D(Class) for Q_D(const Class). Since the const and non-const variants really behave differently, it should help to differentiate these two uses. Also, delete Q_DECLARE_PRIVATE since we do not need it any more:

class AbstractPrivate : public QSharedData
{
public:
    AbstractPrivate() : var(0) {}
    virtual ~AbstractPrivate() = default;

    int var;
};

class Abstract
{
public:
    Abstract(const Abstract &other) = default;
    ~Abstract() = default;
protected:
    explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
public:
    virtual void foo() const = 0;
    virtual void modifyVar() = 0;
protected:
    QSharedDataPointer<AbstractPrivate> d_ptr;
};

class DerivedPrivate : public AbstractPrivate
{
public:
    DerivedPrivate() : AbstractPrivate(), bar(0) {}
    virtual ~DerivedPrivate() = default;

    int bar;
};

class Derived : public Abstract
{
public:
    Derived() : Abstract(*(new DerivedPrivate)) {}
    Derived(const Derived &other) = default;
    ~Derived() = default;
protected:
    explicit Derived(AbstractPrivate &dd) : Abstract(dd) {}
public:
    void foo() const override { CONST_SHARED_D(Derived); cout << "foo " << d->var << " " << d->bar << endl; }
    void modifyVar() override { SHARED_D(Derived); d->var++; d->bar++; }
};

With the same main() code, what’s the result?

foo 0 0
foo 1 16606417
foo 0 0

… big whoops, what is that random thing there? Well, if we use dynamic_cast in place of reinterpret_cast, the program simply crashes after ins->modifyVar(), indicating that ins's d_ptr.data() is not at all a DerivedPrivate.

virtual clones

The detach() method of QSharedDataPointer will by default create an instance of AbstractPrivate regardless of what the instance really is. Fortunately, it is possible to change that behaviour through specifying the clone() method.
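The failure mode is plain object slicing, and it too can be reproduced without Qt. The default detach() boils down to copy-constructing the base type, whatever the object really is; BasePrivate, DerivedPrivate and defaultClone below are illustrative stand-ins, not the Qt implementation:

```cpp
#include <cassert>

struct BasePrivate
{
    int var = 1;
    virtual ~BasePrivate() = default;
};

struct DerivedPrivate : BasePrivate
{
    int bar = 2;
};

// What QSharedDataPointer<BasePrivate>::detach() does by default, roughly:
// copy-construct a *BasePrivate*, regardless of the dynamic type of d.
BasePrivate *defaultClone(const BasePrivate *d)
{
    return new BasePrivate(*d); // slices: the DerivedPrivate part is lost
}
```

After such a "clone", downcasting the copy fails, which is why the reinterpret_cast version read garbage and the dynamic_cast version crashed.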

First, we need to make a virtual function in AbstractPrivate class:

virtual AbstractPrivate *clone() const = 0;

(make it pure virtual just to force all subclasses to re-implement it; if your base class is not abstract you probably want to implement the clone() method) and then override it in DerivedPrivate:

virtual DerivedPrivate *clone() const { return new DerivedPrivate(*this); }

Then, specify the template method for QSharedDataPointer::clone(). As we will re-use it multiple times (for different base classes), it is better to define a macro:

#define DATA_CLONE_VIRTUAL(Class) template<> \
Class##Private *QSharedDataPointer<Class##Private>::clone() \
{ \
    return d->clone(); \
}
// after the definition of Abstract
DATA_CLONE_VIRTUAL(Abstract)

It is not necessary to write DATA_CLONE_VIRTUAL(Derived) as we are never storing a QSharedDataPointer<DerivedPrivate> throughout the hierarchy.

Then test the code again:

foo 0 0
foo 1 1
foo 0 0

– Just as expected! It continues to work if we replace Derived with Abstract in QScopedPointer:

QScopedPointer<Abstract> ins(new Derived());
QScopedPointer<Abstract> ins2(new Derived(*dynamic_cast<const Derived *>(ins.data())));

Well, another problem arises: the constructor for ins2 is ugly and messy. We could, like the private classes, implement a virtual clone() function for these kinds of things, but that is still not gentle enough, and we cannot use a default copy constructor for any class that contains such QScopedPointers.

What about QSharedPointer, which is copy-constructable? Well, then these copies actually point to the same data structures and no copy-on-write is performed at all. This is still not what we want.
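That behaviour is easy to check with std::shared_ptr, which shares QSharedPointer's semantics in this respect: copying the pointer never copies the payload. ShapeData and copiesShareState are made-up names for the demonstration:

```cpp
#include <memory>

struct ShapeData { int var = 0; };

// Copying a shared pointer copies only the handle, never the payload:
// both "copies" mutate the same object, so there is no copy-on-write.
inline bool copiesShareState()
{
    std::shared_ptr<ShapeData> a = std::make_shared<ShapeData>();
    std::shared_ptr<ShapeData> b = a; // refcount bump, same ShapeData
    b->var = 42;
    return a->var == 42;              // true: a observes b's write
}
```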

the Descendents of …

Inspired by Sean Parent’s video, I finally came up with the following implementation:

template<typename T>
class Descendent
{
    struct concept
    {
        virtual ~concept() = default;
        virtual const T *ptr() const = 0;
        virtual T *ptr() = 0;
        virtual unique_ptr<concept> clone() const = 0;
    };
    template<typename U>
    struct model : public concept
    {
        model(U x) : instance(move(x)) {}
        const T *ptr() const { return &instance; }
        T *ptr() { return &instance; }
        // or unique_ptr<model<U> >(new model<U>(U(instance))) if you do not have C++14
        unique_ptr<concept> clone() const { return make_unique<model<U> >(U(instance)); }
        U instance;
    };

    unique_ptr<concept> m_d;
public:
    template<typename U>
    Descendent(U x) : m_d(make_unique<model<U> >(move(x))) {}

    Descendent(const Descendent & that) : m_d(move(that.m_d->clone())) {}
    Descendent(Descendent && that) : m_d(move(that.m_d)) {}

    Descendent & operator=(const Descendent &that) { Descendent t(that); *this = move(t); return *this; }
    Descendent & operator=(Descendent && that) { m_d = move(that.m_d); return *this; }

    const T *data() const { return m_d->ptr(); }
    const T *constData() const { return m_d->ptr(); }
    T *data() { return m_d->ptr(); }
    const T *operator->() const { return m_d->ptr(); }
    T *operator->() { return m_d->ptr(); }
};

This class allows you to use Descendent<T> (read as “descendent of T”) to represent any instance of any subclass of T. It is copy-constructable, move-constructable, copy-assignable, and move-assignable.

Test code:

int main()
{
    Descendent<Abstract> ins = Derived();
    Descendent<Abstract> ins2 = ins;
    ins->foo();
    ins->modifyVar();
    ins->foo();
    ins2->foo();
}

It gives just the same results as before, but much neater and nicer – How does it work?

First we define a class concept. We put here what we want our instance to satisfy. We would like to access it as const and non-const, and to clone it as-is. Then we define a template class model<U> where U is a subclass of T, and implement these functionalities.

Next, we store a unique_ptr<concept>. The reason for not using QScopedPointer is that QScopedPointer is not movable, but movability is a feature we will actually want (in sink arguments and return values).

Finally it’s just the constructor, moving and copying operations, and ways to access the wrapped object.

When Descendent<Abstract> ins2 = ins; is called, we will go through the copy constructor of Descendent:

Descendent(const Descendent & that) : m_d(move(that.m_d->clone())) {}

which will then call ins.m_d->clone(). But remember that ins.m_d actually contains a pointer to model<Derived>, whose clone() is return make_unique<model<Derived> >(Derived(instance));. This expression will call the copy constructor of Derived, then make a unique_ptr<model<Derived> >, which calls the constructor of model<Derived>:

model(Derived x) : instance(move(x)) {}

which move-constructs instance. Finally the unique_ptr<model<Derived> > is implicitly converted to unique_ptr<concept>, as per the conversion rule. “If T is a derived class of some base B, then std::unique_ptr<T> is implicitly convertible to std::unique_ptr<B>.”
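That conversion is easy to verify in isolation with a stripped-down concept/model pair; Concept, Model, and makeConcept below are illustrative names, not the Krita code:

```cpp
#include <cassert>
#include <memory>

struct Concept
{
    virtual ~Concept() = default;
    virtual int tag() const = 0;
};

template <typename U>
struct Model : Concept
{
    U instance;
    explicit Model(U x) : instance(std::move(x)) {}
    int tag() const override { return 1; }
};

// unique_ptr<Model<int>> converts implicitly to unique_ptr<Concept>,
// which is what lets model<U>::clone() return a unique_ptr<concept>.
inline std::unique_ptr<Concept> makeConcept()
{
    return std::unique_ptr<Model<int> >(new Model<int>(7));
}
```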

And from now on, happy hacking — (.>w<.)

Categories: FLOSS Project Planets

KDE Usability & Productivity: Week 71

Planet KDE - Sun, 2019-05-19 02:01

Hot on the heels of last week, this week’s Usability & Productivity report continues to overflow with awesomeness. Quite a lot of the work you see featured here is already available to test out in the Plasma 5.16 beta, too! But why stop there? Here’s more:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a donation to the KDE e.V. foundation.

Categories: FLOSS Project Planets

This Summer with Kdenlive

Planet KDE - Sat, 2019-05-18 20:00

Hi! I’m Akhil K Gangadharan and I’ve been selected for GSoC this year with Kdenlive. My project is titled ‘Revamping the Titler Tool’ and my work for this summer aims to kickoff the complete revamp of one of the major tools used in video-editing in Kdenlive, called the Titler tool.

Titler Tool?

The Titler tool is used to create, you guessed it, title clips. Title clips are clips that contain text and images that can be composited over videos.

The Titler tool

Why revamp it?

In Kdenlive, the titler tool is implemented using QGraphicsView, which has been considered deprecated since the release of Qt 5. This makes the tool prone to upstream bugs that affect its functionality. It has caused issues in the past: popular features like the Typewriter effect had to be dropped because of QGraphicsView bugs that led to uncontrollable crashes.

How?

Using QML.

Currently the Titler tool uses QPainter, where every property is painted and every animation has to be programmed by hand. QML makes it easy to create powerful animations, since it is a language designed for building UIs, and the result can then be rendered to create title clips as needed.

Implementation details - a brief overview

For the summer, I intend to complete work on the backend implementation. The first step is to write and test a complete MLT producer module which can render QML frames, and then to begin integrating and testing this module with a new titler tool.

This is how the backend currently looks -

After the revamp, the backend would look like this -

After the backend is done, we begin integrating it with Kdenlive and evolving the titler to use the new backend.

A great long challenge lies ahead, and I’m looking forward to this summer and beyond with the community to complete writing the tool - right from the backend to the new UI.

Finally, a big thanks to the Kdenlive community for getting me here and to my college student community, FOSS@Amrita for all the support and love!

Categories: FLOSS Project Planets

Python Software Foundation: Scott Shawcroft: History of CircuitPython

Planet Python - Sat, 2019-05-18 18:58


Scott Shawcroft is a freelance software engineer working full time for Adafruit, an open source hardware company that manufactures electronics that are easy to assemble and program. Shawcroft leads development of CircuitPython, a Python interpreter for small devices.

The presentation began with a demo of Adafruit’s Circuit Playground Express, a two-inch-wide circular board with a microcontroller, ten RGB lights, a USB port, and other components. Shawcroft connected the board to his laptop with a USB cable and it appeared as a regular USB drive with a source file called code.py. He edited the source file on his laptop to dim the brightness of the board’s lights. When he saved the file, the board automatically reloaded the code and the lights dimmed. “So that's super quick,” said Shawcroft. “I just did the demo in three minutes.”

Read more 2019 Python Language Summit coverage.

CircuitPython Is Optimized For Learning Electronics

The history of CircuitPython begins with MicroPython, a Python interpreter written from scratch for embedded systems by Damien George starting in 2013. Three years later, Adafruit hired Shawcroft to port MicroPython to the SAMD21 chip they use on many of their boards. Shawcroft’s top priority was serial and USB support for Adafruit’s boards, and then to implement communication with a variety of sensors. “The more hardware you can support externally,” he said, “the more projects people can build.”

As Shawcroft worked with MicroPython’s hardware APIs, he found them ill-fitting for Adafruit’s goals. MicroPython customizes its hardware APIs for each chip family to provide speed and flexibility for hardware experts. Adafruit’s audience, however, is first-time coders. Shawcroft said, “Our goal is to focus on the first five minutes someone has ever coded.”

To build a Python for Adafruit’s needs, Shawcroft forked MicroPython and created a new project, CircuitPython. In his Language Summit talk, he emphasized it is a “friendly fork”: both projects are MIT-licensed and share improvements in both directions. In contrast to MicroPython’s hardware APIs that vary by chip, CircuitPython has one hardware API, allowing Adafruit to write one set of libraries for them all.

MicroPython has a distinct standard library that differs from CPython’s: for example, its time functions are in a module named utime with a different feature set from the standard time module. It also ships modules with features not found in CPython’s standard library, such as advanced filesystem management features. In CircuitPython, Shawcroft removed the nonstandard features and modules. This change helps new coders ramp smoothly from CircuitPython on a microcontroller to CPython on a full-size computer, and it makes Adafruit’s libraries reusable on CPython itself.

Another motive for forking was to create a separate community for CircuitPython. In the original MicroPython project’s community, Shawcroft said, “There are great folks, and there's some not-so-great folks.” The CircuitPython community welcomes beginners, publishes documentation suitable for them, and maintains standards of conduct that are safe for minors.

Audience members were curious about CircuitPython’s support for Python 3.8 and beyond. When Damien George began MicroPython he targeted Python 3.4 compliance, which CircuitPython inherits. Shawcroft said that MicroPython has added some newer Python features, and decisions about more language features rest with Damien George.

Minimal Barrier To Entry

Photo courtesy of Adafruit.

Shawcroft aims to remove all roadblocks for beginners to be productive with CircuitPython. As he demonstrated, CircuitPython auto-reloads and runs code when the user saves it; there are two more user experience improvements in the latest release. First, serial output is shown on a connected display, so a program like print("hello world") will have visible output even before the coder learns how to control LEDs or other observable effects.

Second, error messages are now translated into nine languages, and Shawcroft encourages anyone with language skills to contribute more. Guido van Rossum and A. Jesse Jiryu Davis were excited to see these translations and suggested contributing them to CPython. Shawcroft noted that the existing translations are MIT-licensed and can be ported; however, the translations do not cover all the messages yet, and CircuitPython cannot show messages in non-Latin characters such as Chinese. Chinese fonts are several megabytes of characters, so the size alone presents an unsolved problem.

Later this year, Shawcroft will add Bluetooth support for coders to connect their phone or tablet to an Adafruit board and enjoy the same quick edit-refresh cycle there. Touchscreens will require a different sort of code editor, perhaps more like EduBlocks. Despite the challenges, Shawcroft echoed Russell Keith-Magee’s insistence on the value of mobile platforms: “My nieces, they have tablets and phones. They do not have laptops.”

Shawcroft’s sole request for the core developers was to keep new language features simple, with few special cases. First, because each new CPython feature must be reimplemented in MicroPython and CircuitPython, and special cases make this work thorny. Second, because complex logic translates into large code size, and the space for code on microcontrollers is minuscule.


Categories: FLOSS Project Planets

Evennia: Creating Evscaperoom, part 1

Planet Python - Sat, 2019-05-18 15:39
Over the last month (April-May 2019) I have taken part in the Mud Coder's Guild Game Jam "Enter the (Multi-User) Dungeon". This year the theme for the jam was One Room.

The result was Evscaperoom, a text-based multi-player "escape-room" written in Python using the Evennia MU* creation system. You can play it from that link in your browser or MU*-client of choice. If you are so inclined, you can also vote for it here in the jam (don't forget to check out the other entries while you're at it).

This little series of (likely two) dev-blog entries will try to recount the planning and technical aspects of the Evscaperoom. This is also for myself - I'd better write stuff down now while it's still fresh in my mind!

Inception 

When I first heard about the upcoming game-jam's theme of One Room, an 'escape room' was the first thing that came to mind, not least because I had just recently gotten to solve my own first real-world escape-room as a gift on my birthday. 

If you are not familiar with escape-rooms, the premise is simple - you are locked into a room and have to figure out a way to get out of it by solving practical puzzles and finding hidden clues in the room. 

While you could create such a thing in your own bedroom (and there are also some one-use board game variants), most escape-rooms are managed by companies selling this as an experience for small groups. You usually have one hour to escape and if you get stuck you can press a button (or similar) to get a hint.

So I thought: why not make a computer escape-room? Not only can you do things in the computer that you cannot do in the real world, restricting the game to a single room also limits the scope so that it's conceivable to actually finish the damned thing in a month. 

A concern I had was that everyone else in the jam surely must have gone for the same obvious idea. In the end that was not an issue at all though.


Basic premises
 
I was pretty confident that I would technically be able to create the game in time (not only is Python and Evennia perfect for this kind of fast experimentation and prototyping, I know the engine very well). But that's not enough; I had to first decide on how the thing should actually play. Here are the questions I had to consider:

Room State 

 An escape room can be seen as going through multiple states as puzzles are solved. For example, you may open a cabinet and that may open up new puzzles to solve. This is fine in a single-player game, but how to handle it in a multi-player environment?

My first thought was that each object may have multiple states and that players could co-exist in the same room, seeing different states at the same time. I really started planning for this. It would certainly be possible to implement.

But in the end I considered how a real-world escape-room works - people in the same room solve it together. For there to be any meaning with multi-player, they must share the room state.

So what I went with was a solution where players can create their own room or join an existing one. Each such room is generated on the fly (and filled with objects etc) and will change as players solve it. Once complete and/or everyone leaves, the room is deleted along with all objects in it. Clean and tidy.

So how to describe these states? I pictured that these would be described as normal Python modules with a start- and end function that initialized each state and cleaned it up when a new state was started. In the beginning I pictured these states as being pretty small (like one state to change one thing in the room). In the end though, the entire Evscaperoom fits in 12 state modules. I'll describe them in more detail in the second part of this post. 

Accessibility and "pixel-hunting" in text

When I first started writing descriptions I didn't always note which objects were interactive. It's a very simple and tempting puzzle to add - mention an object as part of a larger description and let the player figure out that it's something they can interact with. This practice is sort-of equivalent to pixel-hunting in graphical games - sweeping with the mouse across the screen until you find that little spot on the screen that you can do something with.

Problem is, pixel-hunting's not really fun. You easily get stuck and when you eventually find out what was blocking you, you don't really feel clever but only frustrated. So I decided that I should clearly mark every object that people could interact with and focus puzzles on better things.


In fact, in the end I made it an option:


As part of this I had to remind myself never to use colors only when marking important information: Visually impaired people with screen readers will simply miss that. Not to mention that some just disable colors in their clients.

So while I personally think option 2 above is the most visually pleasing, Evscaperoom defaults to the third option. It should start everyone off on equal footing. Evennia has a screen-reader mode out of the box, but I moved it into the menu here for easy access.

Inventory and collaboration

In a puzzle-game, you often find objects and combine them with other things. Again, this is simple to do in a single-player game: Players just pick things up and use them later.

But in a multi-player game this offers a huge risk: players that pick up something important and then log off. The remaining players in that room would then be stuck in an unsolvable room - and it would be very hard for them to know this.

In principle you could try to 'clean' player inventories when they leave, but not only does it add complexity, there is another issue with players picking things up: It means that the person first to find/pick up the item is the only one that can use it and look at it. Others won't have access until the first player gives it up. Trusting that to anonymous players online is not a good idea.

So in the end I arrived at the following conclusions:
  • As soon as an item/resource is discovered, everyone in the room must be able to access it immediately.
  • There can be no inventory. Nothing can ever be picked up and tied to a specific player.
  • As soon as a discovery is made, this must be echoed to the entire room (it must not be up to the finder to announce what they found to everyone else).  
As a side-effect of this I also set a limit to the kind of puzzles I would allow:
  • No puzzle may require more than one player to solve. While one could indeed create some very cool puzzles where people collaborate, it's simply not feasible with random strangers on the internet: at any moment the other person may log off and leave you stuck. And that's if you even find someone logged in at the same time in the first place! The room must always be solvable solo, from beginning to end.
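The conclusions above can be sketched as a minimal model in plain Python. The classes and method names here are invented for illustration (the real game uses Evennia's object model): a discovered item immediately becomes a shared room resource, and the room - not the finder - announces it to everyone.

```python
# Sketch of the "no inventory" rule: discoveries are shared room
# resources and the find is echoed to everyone present.

class Player:
    def __init__(self, name):
        self.name = name
        self.messages = []

class Room:
    def __init__(self):
        self.players = []
        self.resources = set()

    def msg_all(self, text):
        # The room echoes to everyone, so announcing a find is never
        # left up to the individual finder.
        for player in self.players:
            player.messages.append(text)

    def discover(self, finder, item):
        # The item is never tied to the finder - it joins the room pool.
        self.resources.add(item)
        self.msg_all(f"{finder.name} found the {item}! Anyone can now examine it.")

room = Room()
alice, bob = Player("Alice"), Player("Bob")
room.players += [alice, bob]
room.discover(alice, "key")
print(bob.messages[-1])  # Bob sees the find even though Alice made it
```

If Alice logs off afterwards, nothing is lost: the key stays in `room.resources` for everyone else.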

Focusing on objects

So without an inventory system, how do you interact with objects? A trademark of any puzzle game is using one object with another, and examining things closely to find clues. I turned to graphical adventure games for inspiration:

Secret of Monkey Island ©1990 LucasArts. Image from old-games.com
A common way to operate on an object in traditional adventure games is to hover the mouse over it and then select the action you want to apply. In later (3D) games you might even zoom in on the object and rotate it around with your mouse to look for clues.

While Evennia and modern UI clients would allow selecting objects with the mouse, I wanted this to work the traditional MUD way, by entering commands. So I decided that a player is always in one of two states:
  • The 'normal' state: When you use look you see the room description.
  • The 'focused' state: You focus on a specific object with the examine <target> command (aliases are ex or just e). Now object-specific actions become available to you. Use examine again to "un-focus". 

In the example above, the fireplace points out other objects you could also focus on, whereas the last parenthesis includes one or more "actions" that you can perform on the fireplace only when you have it focused. 

This ends up quite different from most traditional MUD-style input. When I first released the game, I found people logging off right after their first examine. It turned out they couldn't figure out how to leave focus mode, so they assumed the thing was buggy and quit. Of course it's mentioned if you care to write help, but that is clearly one step too many for such an important UI concept.

So I ended up adding the header above that always reminds you. And since then I've not seen any confusion over how the focus mode works.

To make it easy to focus on things, I also decided that each room would only ever have one object with a given name. So there is, for example, only one single object in the game named "key" that you can focus on.
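The two-state input model can be sketched as a tiny state machine in plain Python. The class and its internals are made up for this example (the real game implements this as Evennia commands), but the behavior follows the rules above: `examine <target>` focuses, and examining the same thing again un-focuses.

```python
# Sketch of the normal/focused input states described above.

class PlayerState:
    def __init__(self, room_objects):
        self.room_objects = room_objects   # name -> description (unique names)
        self.focus = None                  # currently focused object, if any

    def examine(self, target=None):
        if target is None or target == self.focus:
            # 'examine' with no target, or on the focused object, un-focuses.
            self.focus = None
            return "You look around the room."
        if target in self.room_objects:
            self.focus = target
            return f"You focus on the {target}. {self.room_objects[target]}"
        return f"There is no '{target}' here."

state = PlayerState({"fireplace": "Soot covers the mantelpiece."})
print(state.examine("fireplace"))
print(state.examine("fireplace"))  # same target again -> back to normal state
```

Because object names are unique per room, the lookup never has to disambiguate between two "keys" - which is exactly why that naming rule simplifies the focus command.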

Communication

I wanted players to co-exist in the same room so that they could collaborate on solving it. This meant communication must be possible. I pictured people would want to point things out and talk to each other.

In my first round of revisions I had a truckload of individual emotes; you could

      point at target

 for example. In the end I just limited it to  

     say/shout/whisper <message>

and 

     emote <whatever>

And judging by what people actually use, this is more than enough (say alone probably covers 99% of what people need). I had a notion that shout/whisper could be used in a puzzle later, but in the end I decided that communication commands should be strictly between players and have nothing to do with the puzzles.

I removed all other interaction: there is no fighting, and without an inventory or any requirement to collaborate on puzzles, there is no need for interactions beyond communication.
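The pared-down command set is small enough to sketch in a few lines. The formatting function below is hypothetical (Evennia handles this differently), but it shows the split between the speech verbs and the free-form emote:

```python
# Sketch: the only player-to-player interactions are speech and emotes.

def format_speech(speaker, mode, message):
    verbs = {"say": "says", "whisper": "whispers", "shout": "shouts"}
    if mode == "emote":
        # Free-form: 'emote scratches his head.' -> 'Bob scratches his head.'
        return f"{speaker} {message}"
    return f'{speaker} {verbs[mode]}, "{message}"'

print(format_speech("Alice", "say", "Look at the fireplace!"))
print(format_speech("Bob", "emote", "scratches his head."))
```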

In the first version you didn't even see what the others did, but eventually I added a notice showing what other players are focusing on at the moment (and of course when some major thing is solved or found).

In the end I don't even list characters as objects in the room (you have to use the who command to see who's in there with you).

The main help command output.

Story

It's very common for this type of game to have a dangerous or scary theme: "get out before the bomb explodes", "save the space ship before the engines overheat", "flee the axe murderer before he comes back" and so on. I'm no stranger to dark themes, but for this I wanted something friendlier and brighter, maybe with some dark undercurrents here and there.

My Jester character is someone I've not only depicted in art; she's also an old RP character and literary protagonist of mine. Who else would find it funny to lock someone in a room only to provide crazy puzzles and hints for them to get out again? So my flimsy 'premise' was this:


The village Jester wants to win the pie eating contest. You are one of her most dangerous opponents. She tricked you to her cabin and now you are locked in! If you don't get out in time, she'll get to eat all those pies on her own and surely win!
That's it - this became the premise from which the entire game flowed. I quickly decided it should be a very "small-scale" story: no life-or-death situation, no saving of the world. The drama takes place in a small village, with an "adversary" who doesn't really want to hurt you, only to eat more pies than you.

From this, the way to offer hints came naturally: just eat a slice of the "hintberry pie" the Jester made (she even encourages you to eat it). It gives you a hint but is also very filling. So if you eat too much, how will you beat her in the contest later, even if you do get out?

To further the rustic and friendly tone I set the story on a warm summer day. Many descriptions mention sunshine, chirping birds and the smell of pie. I aimed to let the text bring out the quirky and slightly comedic tone of the puzzles the Jester left behind. The player also sometimes gets teased by the game when doing things that don't make sense.

I won't go into the story further here - it's best if you experience it yourself. Let's just say that the village has some old secrets, and the Jester has her own ways of doing things and of telling a story. The game has multiple endings, and so far people have drawn very different conclusions at the end.

Scoring

Most often in escape rooms, the final score is determined by time and the number of hints used. I keep the latter: for every pie you eat, you get a penalty on your final score.

As for time - the background story would fit a time limit very well (get out in X time, after which the pie-eating contest starts!). But from experience with other online text-based games I decided against it. Not only should a player be able to take a break; they may also want to wait for a friend, leave and come back, and so on.

But more importantly, I want players to explore and read all my carefully crafted descriptions! So I'd much rather prefer they take their time and reward them for being thorough. 

So in the end I award specific scores for actions throughout the game instead. Most points are for doing things that drive the story forward, such as using something or solving a puzzle. But a significant portion of the score comes from turning every stone and trying everything out. A nice side-effect is that even if you know exactly how to solve everything and rush through the game, you will still not end up with a perfect score.

The final score, adjusted by hints, is then used to determine whether you make it to the contest in time and how you fare. This means that if you explore carefully you have a "buffer" of points, so eating a few pies may still land you a good result in the end.
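A toy model of this scoring scheme makes the "buffer" effect concrete. All the numbers and thresholds here are invented for the example; the game's actual values are not stated in the post:

```python
# Sketch: points for actions and for thoroughness, a flat penalty per
# hintberry pie eaten, and a threshold deciding the contest outcome.

HINT_PENALTY = 5        # hypothetical cost per pie

def final_score(action_points, exploration_points, pies_eaten):
    return action_points + exploration_points - HINT_PENALTY * pies_eaten

def contest_result(score, threshold=50):
    # A thorough explorer builds a buffer above the threshold, so a few
    # pies can still leave them with a winning score.
    if score >= threshold:
        return "You make it to the contest in time!"
    return "Too slow - the Jester eats all the pies."

score = final_score(action_points=40, exploration_points=25, pies_eaten=2)
print(score, contest_result(score))
```

With these made-up numbers, two pies cost 10 points but the exploration buffer of 25 keeps the player comfortably above the threshold.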
 

First sketch

I really entered the game-building phase with no notion of how the Jester's cabin should look nor which puzzles should be in it. I tried to write things down beforehand, but it didn't really work for me.

So in the end I decided "let's just put a lot of interesting stuff in the room and then I'll figure out how it all interacts". I'm sure this differs from game-maker to game-maker, but for me this process worked perfectly.


My first, very rough, sketch of the Jester's cabin

The first sketch above ended up being what I used, although many of the objects mentioned never made it into the final game and some things switched places. I did some other sketches too, but they'd be spoilers so I won't show them here ...


The actual game logic

The Evscaperoom principles outlined above deviate quite a bit from the traditional MU* style of game play. 

While Evennia provides everything for database management, in-game objects, commands, networking and other resources, the specifics of your game are something you build yourself - and you have the full power of Python to do it!

So for the first three days of the jam I used Evennia to build the custom game logic needed to provide the evscaperoom style of game play. I also made the tools I needed to quickly create the game content (which then took me the rest of the jam to make). 

In part 2 of this blog post I will cover the technical details of the Evscaperoom I built. I'll also go through some issues I ran into and conclusions I drew. I'll link to that from here when it's available!