FLOSS Project Planets

KnackForge: How to update Drupal 8 core?

Planet Drupal - Sat, 2018-03-24 01:01
How to update Drupal 8 core?

Let's see how to update your Drupal site between 8.x.x minor and patch versions: for example, from 8.1.2 to 8.1.3 (a patch release), or from 8.3.5 to 8.4.0 (a minor release). I hope this will help you.

  • If you are upgrading to Drupal version x.y.z

           x -> is known as the major version number

           y -> is known as the minor version number

           z -> is known as the patch version number.
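The x.y.z convention above can be made concrete with a tiny Python helper (a sketch; the names are mine, not part of Drupal):

```python
def parse_version(version):
    """Split an x.y.z Drupal core version string into its numeric parts."""
    major, minor, patch = (int(part) for part in version.split("."))
    return {"major": major, "minor": minor, "patch": patch}

def update_kind(old, new):
    """Which version component changes first when moving from old to new."""
    a, b = parse_version(old), parse_version(new)
    for part in ("major", "minor", "patch"):
        if a[part] != b[part]:
            return part
    return "none"
```

For the examples above, `update_kind("8.1.2", "8.1.3")` is a patch update and `update_kind("8.3.5", "8.4.0")` is a minor update.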

Sat, 03/24/2018 - 10:31
Categories: FLOSS Project Planets

Michal Čihař: Gammu 1.38.2

Planet Debian - 5 hours 11 min ago

Yesterday Gammu 1.38.2 was released. This is a bugfix release, fixing, for example, USSD and MMS decoding in some situations.

The Windows binaries are available as well. These are built using AppVeyor and will help bring Windows users back to latest versions.

Full list of changes and new features can be found on Gammu 1.38.2 release page.

Would you like to see more features in Gammu? You can support further Gammu development at Bountysource salt or by direct donation.

Filed under: Debian English Gammu | 0 comments

Categories: FLOSS Project Planets

Bryan Pendleton: Meanwhile, on the Internet...

Planet Apache - Tue, 2017-03-28 23:14

... when will I possibly find the time to study all this?

  • Research Debt: The insidious thing about research debt is that it’s normal. Everyone takes it for granted, and doesn’t realize that things could be different. For example, it’s normal to give very mediocre explanations of research, and people perceive that to be the ceiling of explanation quality. On the rare occasions that truly excellent explanations come along, people see them as one-off miracles rather than a sign that we could systematically be doing better.
  • Operating System: From 0 to 1. This book helps you gain the foundational knowledge required to write an operating system from scratch. Hence the title, 0 to 1.

    After completing this book, at the very least you will learn:

    • How to write an operating system from scratch by reading hardware datasheets. In the real world, it works like that. You won’t be able to consult Google for a quick answer.
    • A big picture of how each layer of a computer is related to the other, from hardware to software.
    • Write code independently. It’s pointless to copy and paste code. Real learning happens when you solve problems on your own. Some examples are given to kick start, but most problems are yours to conquer. However, the solutions are available online for you to examine after giving it a good try.
    • Linux as a development environment and how to use common tools for low-level programming.
    • x86 assembly in-depth.
    • How a program is structured so that an operating system can run.
    • How to debug a program running directly on hardware with gdb and QEMU.
    • Linking and loading on bare metal x86_64, with pure C. No standard library. No runtime overhead.
  • The System Design Primer: Learning how to design scalable systems will help you become a better engineer.

    System design is a broad topic. There is a vast amount of resources scattered throughout the web on system design principles.

    This repo is an organized collection of resources to help you learn how to build systems at scale.

  • Calling Bullshit in the Age of Big Data: Our learning objectives are straightforward. After taking the course, you should be able to:
    • Remain vigilant for bullshit contaminating your information diet.
    • Recognize said bullshit whenever and wherever you encounter it.
    • Figure out for yourself precisely why a particular bit of bullshit is bullshit.
    • Provide a statistician or fellow scientist with a technical explanation of why a claim is bullshit.
    • Provide your crystals-and-homeopathy aunt or casually racist uncle with an accessible and persuasive explanation of why a claim is bullshit.
    We will be astonished if these skills do not turn out to be among the most useful and most broadly applicable of those that you acquire during the course of your college education.
  • The Myers diff algorithm: part 1. In this series of articles, I’d like to walk you through the default diff algorithm used by Git. It was developed by Eugene W. Myers, and the original paper is available online. While the paper is quite short, it is quite mathematically dense and is focused on proving that it works. The explanations here will be less rigorous, but will hopefully be more intuitive, giving a detailed walk-through of what the algorithm actually does and how it works.
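As a taste of that last link, here is a minimal Python sketch of the greedy forward pass of Myers' O(ND) algorithm. It computes only the length of the shortest edit script, not the edit path itself, and is my own illustration rather than code from the articles:

```python
def myers_distance(a, b):
    """Length of the shortest edit script (deletes + inserts) turning
    a into b, via the greedy forward pass of Myers' O(ND) algorithm."""
    n, m = len(a), len(b)
    v = {1: 0}  # v[k]: furthest x reached on diagonal k = x - y
    for d in range(n + m + 1):
        for k in range(-d, d + 1, 2):
            # step down (insert) or right (delete), whichever reaches further
            if k == -d or (k != d and v.get(k - 1, 0) < v.get(k + 1, 0)):
                x = v.get(k + 1, 0)
            else:
                x = v.get(k - 1, 0) + 1
            y = x - k
            # follow the "snake": free diagonal moves over matching items
            while x < n and y < m and a[x] == b[y]:
                x, y = x + 1, y + 1
            v[k] = x
            if x >= n and y >= m:
                return d
    return n + m
```

On the classic example pair `"ABCABBA"` and `"CBABAC"`, this returns 5.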
Categories: FLOSS Project Planets

Bryan Pendleton: The end of SHA-1, one month later

Planet Apache - Tue, 2017-03-28 22:53

As everyone already knows, the SHA-1 cryptographic hash function has been Shattered.

This is no particular surprise; as Bruce Schneier pointed out on his blog nearly five years ago, cryptography experts were well aware of the weakness of SHA-1. Schneier quoted Jesse Walker as saying:

A collision attack is therefore well within the range of what an organized crime syndicate can practically budget by 2018, and a university research project by 2021.

Pretty good estimate, I'd say, Mr. Walker.

But what does this mean, in practice?

Perhaps the most visible impact is in the area of network security, where Google has been warning about problems for quite some time, and started putting those warnings into action last fall: SHA-1 Certificates in Chrome

To protect users from such attacks, Chrome will stop trusting certificates that use the SHA-1 algorithm, and visiting a site using such a certificate will result in an interstitial warning.

Other large internet sites have followed suit; kudos to them for doing so quickly and responsibly.

Another very interesting aspect of this signature collision arises in what are known as "content-addressable file systems", of which git is the best known. This is a very significant issue, as the Shattered web site points out:

It is essentially possible to create two GIT repositories with the same head commit hash and different contents, say a benign source code and a backdoored one. An attacker could potentially selectively serve either repository to targeted users. This will require attackers to compute their own collision.

And it doesn't just affect git; subversion is vulnerable, as is Mercurial.

People are right to be worried about this.

However, when it comes to SCMs, I think the issue isn't completely cut-and-dried, for several reasons:

  • Firstly, we're talking about an issue in which an attacker deliberately constructs a collision, as opposed to an accidental collision. The use of SHA-1 identifiers for git objects remains a useful, practical, and trouble-free technique for allowing people to collaborate independently on common computer files without sharing a central server (the so-called DVCS paradigm). In the 12 years that git has been in use, and the trillions of git object SHAs that have been computed, nobody anywhere in the world has reported an accidental collision in practice.
  • This protection against accidental collisions is strengthened by the fact that git encodes certain other information into the computed SHA-1 value besides just the file's content: namely, the object type (blob/tree/commit/tag) and the object length for blob SHAs, and other ancillary data such as timestamps, etc. for commit SHAs. I'm not saying this makes git any safer from a security point of view; after all, Google arranged to have their two colliding PDF files be both exactly 422,435 bytes long. But it does mean that the accidental collision risk is clearly quite small.
  • And, of course, for the attacker to actually supplant "a benign source code" with "a backdoored one," not only does the attacker have to construct the alternate file (of identical length and identical SHA-1, but with evil content), but that backdoored file has to still be valid source code. It is no easy task to add in this additional constraint, even if you are the wealthy-enough attacker to be willing to spend "9,223,372,036,854,775,808 SHA1 computations". I'd imagine that this task gets easier, somewhat, as the size of that source file gets larger; that is, given that a certain amount of the backdoored evil source file is necessarily consumed by the source code of the evil payload itself, the attacker is forced to use the remainder of the file size for containing the rubbish that is necessary to make the SHA-1 values line up, and the smaller that remainder is, the harder it will be to generate that matching SHA-1, right? So it's one more reason to keep your individual source files small?
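The header-plus-content hashing mentioned in the second point is easy to reproduce: a blob's identifier is the SHA-1 of a "blob <size>\0" prefix followed by the file bytes, which is why it differs from a plain SHA-1 of the content. A short Python sketch:

```python
import hashlib

def git_blob_sha1(data: bytes) -> str:
    """SHA-1 the way git names a blob: type, length, NUL byte, then content."""
    return hashlib.sha1(b"blob %d\x00" % len(data) + data).hexdigest()
```

The empty blob comes out as git's well-known `e69de29...` identifier, which `sha1("")` alone (`da39a3ee...`) does not produce.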

The above was too many words: what I'm trying to point out is:

With SSL/TLS, people use SHA-1 to provide security

With git/Mercurial, people use SHA-1 to provide decentralized object identification workflows, for easier collaboration among trusted teams.

The crucial difference between the use of SHA-1 values in validating network security certificates, versus the use of those values in assigning source code file identifiers, involves the different ways that humans use these two systems.

That is, when you connect to a valuable web site over SSL/TLS, you are depending on that certificate signature to establish trust in your mind between yourself and some remote network entity.

But when you share source code with your team, with whom you are collaborating using a tool like Mercurial, Subversion, or git, there are, crucially, other trust relationships in effect between you and the other human beings with whom you are a collaborator.

So, yes, be careful from whom you download a git repo full of source code that you intend to compile and run on your computer.

But wasn't that already true, long before SHA-1 was broken?

Categories: FLOSS Project Planets

Community Over Code: Vote Counting At The Apache Member’s Meeting!

Planet Apache - Tue, 2017-03-28 21:04

The ASF is holding its annual Members' meeting now, where Members get to elect a new board as well as elect new individual Members to the Foundation.  We do this by holding a live IRC meeting on a Tuesday, then we vote with secure email ballots asynchronously during the recess, then reconvene on Thursday to announce results.  But how does the meeting really work?

TL;DR: It’s all explained in the README.txt file that was emailed to every Member.  And if anyone else wants to read all the details of how we tabulate votes, it’s all documented!

But I hear you cry: yes, but what does STV mean?

When voting for the board using single transferable votes (STV) the order of your votes is crazy important.  OK, very important.  But you really want to think about the order, especially who you place in the first two votes (at the top of the Apache STeVe voting target).

STV allows you to vote for all the people you’d like to see on the board and to express your preference for who you’d like to see the most, who else you might like, and so on.  It also allows you to leave off candidates you do not want to vote for.  STV does this by counting everyone’s votes for all candidates in several (sometimes many) rounds of comparisons, starting from everyone’s first-place vote and only looking at second, third, and later choices as needed.

This video on STV does a good job of explaining how votes are reallocated in each round.  What’s important to remember is that every voter has expressed a list of candidates in priority order: first choice, second choice, etc.  Some voters only vote for a single candidate; many vote for up to 9 candidates (the number of seats on the board).  Rarely, a voter will rank more than 9 candidates if they like more people for the board.

STV Goes Round And Round

STV collects everyone’s first-place votes and then sees whether any candidates clearly have enough first-place votes alone to win in the first round.  If there are 9 seats, any candidate that wins over the roughly 11% threshold (100/9) of the vote from first-place votes alone is elected immediately.  In most elections I’ve seen, that ends up selecting one, two, or sometimes three candidates in the first round.  STV continues counting votes in several rounds until all seats are filled.

STV records how many votes are allocated to each candidate in the first round – winners and not-yet-winners.  Importantly, it then re-allocates any surplus votes over the 11% threshold for the winning candidates: it checks the second-place choice of each voter whose first-place vote went to a winning candidate, and transfers a proportion of those votes accordingly.  This adds more votes to many of the not-yet-winners, which get carried forward to the next round.


STV also eliminates a candidate who has the least number of first-place votes.  In the video example (and many real-life cases), this is a candidate who clearly does not have enough strong support from voters to get a seat.  All the votes from this now-losing candidate are also re-allocated proportionally to whoever those voters had chosen in second place.

STV then starts a new round after reallocating those votes.  Any remaining candidates who now have more than the 11% threshold are elected.  Any extra votes over the 11% are then reallocated to other candidates by moving to the next preference on those voters’ ballots.  The obvious lowest-vote candidate is also eliminated, and their votes are redistributed.

STV continues rounds like this until all seats are filled with candidates.  Note that the above description is general; the specific STV algorithm (Meek’s) in the code determines exactly how the reallocations and rounds work.  Some elections take dozens of rounds to fully decide the last few seats.
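The rounds described above can be sketched in Python. This is a deliberately simplified toy: it uses the post's votes/seats threshold and transfers surpluses fractionally, and is not the Meek algorithm that Apache STeVe actually implements:

```python
from fractions import Fraction

def first_hopeful(prefs, hopeful):
    """First still-standing candidate on a ballot, or None."""
    return next((c for c in prefs if c in hopeful), None)

def stv(ballots, seats):
    """Toy STV count: elect on quota, transfer surplus fractionally,
    eliminate the weakest candidate when nobody reaches the quota."""
    hopeful = {c for b in ballots for c in b}
    weighted = [(list(b), Fraction(1)) for b in ballots]
    quota = Fraction(len(ballots), seats)  # the post's total-votes/seats threshold
    elected = []
    while len(elected) < seats and hopeful:
        if len(hopeful) + len(elected) <= seats:
            elected += sorted(hopeful)  # remaining candidates fill the seats
            break
        tally = {c: Fraction(0) for c in hopeful}
        for prefs, w in weighted:
            top = first_hopeful(prefs, hopeful)
            if top is not None:
                tally[top] += w
        best = max(hopeful, key=lambda c: tally[c])
        if tally[best] >= quota:
            # elected: ballots counting for `best` pass on only their surplus
            factor = (tally[best] - quota) / tally[best]
            weighted = [(p, w * factor) if first_hopeful(p, hopeful) == best
                        else (p, w) for p, w in weighted]
            elected.append(best)
        else:
            # nobody reaches the quota: drop the weakest candidate
            best = min(hopeful, key=lambda c: tally[c])
        hopeful.discard(best)
    return elected
```

With 11 ballots and 2 seats, six A-then-B ballots elect A immediately, and A's surplus then carries B over the line in a later round.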

Oversight And The Code

The ASF uses the Apache STeVe project’s code to run our own elections.  A set of volunteer vote monitors prepare the ballot issues and data for the STeVe website tool ahead of time.  All votes are strongly tied to a specific Apache member’s ID by logging into the secure voting server.  Every vote also results in an email to the owner of that vote ID (but not including the vote itself).  Voters can vote on any issue (the board election, or new member elections); all votes are recorded securely, but only the last timestamped vote is counted.

All votes and changes to the running voting server are logged and emailed to all vote monitors, so there is proper oversight.  Various signatures on the data in the voter system ensure that if any underlying data is tampered with directly, the software recording the votes will note the error and any of the monitors will be able to see that.  Vote monitors are not able to see who voted for what – merely that each vote came from a valid voter ID.

Thanks to a few volunteers who have stepped up to improve the Apache STeVe voting tool and its web interface, our annual meeting voting is simple for the many Members who are eligible to vote for the board.  And compared to the early years of the ASF – where we used paper ballots! – this is much simpler!

We also elect a set of nominated member candidates at each meeting; new member candidates use a simple Yes/No/Abstain method within Apache STeVe as well.

Note that director election results are published shortly after the meeting, since once the election is done, we have a new board!  New member elections are not announced until 30 days after the meeting, which gives us time to invite the new members privately and confirm that they accept and will sign the membership application.

What Happens Next?

The monthly board meetings continue on the usual cycle.  Traditionally, the new board will work together for a couple of months before making any changes to executive officer appointments or other policies.  Depending on the volunteers willing to step up, in some years we have left all executive officers as-is; in a few years, the board has made a number of changes – usually, when the existing volunteers in those roles have asked to be replaced, often due to needing their volunteer time back for the rest of life.

In any case, all Members are still able to read and contribute to governance within the ASF if they wish to step up and help!

The post Vote Counting At The Apache Member’s Meeting! appeared first on Community Over Code.

Categories: FLOSS Project Planets

Python Software Foundation: Python at Google Summer of Code: Apply by April 3

Planet Python - Tue, 2017-03-28 19:39


Google Summer of Code (GSoC) is a global program that offers post-secondary students an opportunity to be paid for contributing to an open source project over a three-month period. Since 2005, the Python Software Foundation (PSF) has served as an "umbrella organization" to a variety of Python-related projects, as well as sponsoring projects related to the development of the Python language.
April 3rd is the last date for student applications for GSoC 2017. You can view all the sub-orgs under PSF and see what projects are seeking applications, then go to the Google Summer of Code site to submit your application.

Questions?

To ask questions about specific projects, go to the sub-orgs page and click "Contact" under the project you want to ask about.

The student application deadline is April 3, and decisions will be announced on May 4.


Timeline for Google Summer of Code 2017
Categories: FLOSS Project Planets

Keith Packard: DRM-lease

Planet Debian - Tue, 2017-03-28 18:22
DRM display resource leasing (kernel side)

So, you've got a fine head-mounted display and want to explore the delights of virtual reality. Right now, on Linux, that means getting the window system to cooperate because the window system is the DRM master and holds sole access to all display resources. So, you plug in your device, play with RandR to get it displaying bits from the window system and then carefully configure your VR application to use the whole monitor area and hope that the desktop will actually grant you the boon of page flipping so that you will get reasonable performance and maybe not even experience tearing. Results so far have been mixed, and depend on a lot of pieces working in ways that aren't exactly how they were designed to work.

We could just hack up the window system(s) and try to let applications reserve the HMD monitors and somehow remove them from the normal display area so that other applications don't randomly pop up in the middle of the screen. That would probably work, and would take advantage of much of the existing window system infrastructure for setting video modes and performing page flips. However, we've got a pretty spiffy standard API in the kernel for both of those, and getting the window system entirely out of the way seems like something worth trying.

I spent a few hours in Hobart chatting with Dave Airlie during LCA and discussed how this might actually work.

Goals
  1. Use KMS interfaces directly from the VR application to drive presentation to the HMD.

  2. Make sure the window system clients never see the HMD as a connected monitor.

  3. Maybe let logind (or other service) manage the KMS resources and hand them out to the window system and VR applications.

Limitations
  1. Don't make KMS resources appear and disappear. It turns out applications get confused when the set of available CRTCs, connectors and encoders changes at runtime.
An Outline for Multiple DRM masters

By the end of our meeting in Hobart, Dave had sketched out a fairly simple set of ideas with me. We'd add support in the kernel to create additional DRM masters. Then, we'd make it possible to 'hide' enough state about the various DRM resources so that each DRM master would automagically use disjoint subsets of resources. In particular, we would:

  1. Pretend that connectors were always disconnected

  2. Mask off crtc and encoder bits so that some of them just didn't seem very useful.

  3. Block access to resources controlled by other DRM masters, just in case someone tried to do the wrong thing.

Refinement with Eric over Swedish Pancakes

A couple of weeks ago, Eric Anholt and I had breakfast at the original pancake house and chatted a bit about this stuff. He suggested that the right interface for controlling these new DRM masters was through the existing DRM master interface, and that we could add new ioctls that the current DRM master could invoke to create and manage them.

Leasing as a Model

I spent some time just thinking about how this might work and came up with a pretty simple metaphor for these new DRM masters. The original DRM master on each VT "owns" the output resources and has final say over their use. However, a DRM master can create another DRM master and "lease" resources it has control over to the new DRM master. Once leased, resources cannot be controlled by the owner unless the owner cancels the lease, or the new DRM master is closed. Here's some terminology:

DRM Master
Any DRM file which can perform mode setting.
Owner
The original DRM Master, created by opening /dev/dri/card*
Lessor
A DRM master which has leased out resources to one or more other DRM masters.
Lessee
A DRM master which controls resources leased from another DRM master. Each Lessee leases resources from a single Lessor.
Lessee ID
An integer which uniquely identifies a lessee within the tree of DRM masters descending from a single Owner.
Lease
The contract between the Lessor and Lessee which identifies the resources that may be controlled by the Lessee. All of the resources must be owned by or leased to the Lessor.

With Eric's input, the interface to create a lease was pretty simple to write down:

int drmModeCreateLease(int fd, const uint32_t *objects, int num_objects, int flags, uint32_t *lessee_id);

Given an FD to a DRM master, and a list of objects to lease, a new DRM master FD is returned that holds a lease to those objects. 'flags' can be any combination of O_CLOEXEC and O_NONBLOCK for the newly minted file descriptor.

Of course, the owner might want to take some resources back, or even grant new resources to the lessee. So, I added an interface that rewrites the terms of the lease with a new set of objects:

int drmModeChangeLease(int fd, uint32_t lessee_id, const uint32_t *objects, int num_objects);

Note that nothing here makes any promises about the state of the objects across changes in the lease status; the lessor and lessee are expected to perform whatever modesetting is required for the objects to be useful to them.
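To make the ownership rules concrete, here is a toy Python model of the lease tree. It mirrors the semantics described above (leased resources leave the lessor's control until returned), but it is purely illustrative and not kernel code:

```python
class DrmMaster:
    """Toy model of the DRM lease tree described above (illustrative only)."""
    _next_lessee_id = 1

    def __init__(self, resources, lessor=None):
        self.resources = set(resources)  # objects this master may control
        self.lessor = lessor
        self.lessees = {}

    def create_lease(self, objects):
        """Roughly drmModeCreateLease: carve objects out into a new master."""
        objects = set(objects)
        # a lease may only cover resources owned by or leased to the lessor
        assert objects <= self.resources, "objects not controlled by lessor"
        lessee = DrmMaster(objects, lessor=self)
        lessee_id = DrmMaster._next_lessee_id
        DrmMaster._next_lessee_id += 1
        self.lessees[lessee_id] = lessee
        self.resources -= objects  # leased resources leave the lessor's control
        return lessee_id, lessee

    def change_lease(self, lessee_id, objects):
        """Roughly drmModeChangeLease: rewrite the terms of a lease."""
        lessee = self.lessees[lessee_id]
        objects = set(objects)
        returned = lessee.resources - objects   # go back to the lessor
        granted = objects - lessee.resources    # must come from the lessor
        assert granted <= self.resources, "objects not controlled by lessor"
        self.resources |= returned
        self.resources -= granted
        lessee.resources = objects
```

For example, leasing a CRTC/connector pair to a VR compositor removes them from the owner's set, and shrinking the lease later hands the connector back.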

Window System Integration

There are two ways to integrate DRM leases into the window system environment:

  1. Have logind "lease" most resources to the window system. When a HMD is connected, it would lease out suitable resources to the VR environment.

  2. Have the window system "own" all of the resources and then add window system interfaces to create new DRM masters leased from its DRM master.

I'll probably go ahead and do 2. in X and see what that looks like.

One trick with any of this will be to hide HMDs from any RandR clients listening in on the window system. You probably don't want the window system to tell the desktop that a new monitor has been connected, have it start reconfiguring things, and then have your VR application create a new DRM master, making the HMD appear to the window system to have disconnected, and have it go reconfigure things all over again.

I'm not sure how this might work, but perhaps having the VR application register something like a passive grab on hot plug events might make sense? Essentially, you want it to hear about monitor connect events, go look to see if the new monitor is one it wants, and if not, release that to other X clients for their use. This can be done in stages, with the ability to create a new DRM master over X done first, and then cleaning up the hotplug stuff later on.

Current Status

I hacked up the kernel to support the drmModeCreateLease API, and then hacked up kmscube to run two threads with different sets of KMS resources. That ran for nearly a minute before crashing and requiring a reboot. I think there may be some locking issues with page flips from two threads to the same device.

I think I also made the wrong decision about how to handle lessors closing down. I tried to let the lessors get deleted and then 'orphan' the lessees. I've rewritten that so that lessees hold a reference on their lessor, keeping the lessor in place until the lessee shuts down. I've also written the kernel parts of the drmModeChangeLease support.

Questions
  • What should happen when a Lessor is closed? Should all access to controlled resources be revoked from all descendant Lessees?

    Proposed answer -- lessees hold a reference to their lessor so that the entire tree remains in place. A Lessor can clean up before exiting by revoking lessee access if it chooses.

  • How about when a Lessee is closed? Should the Lessor be notified in some way?

  • CRTCs and Encoders have properties. Should these properties be automatically included in the lease?

    Proposed answer -- no, userspace is responsible for constructing the entire lease.

Categories: FLOSS Project Planets

DSPIllustrations.com: The Sound of Harmonics - Approximating instrument sounds with Fourier Series

Planet Python - Tue, 2017-03-28 17:50
The Sound of Harmonics - Approximating instruments with Fourier Series

In a previous article about the Fourier Series calculation we have illustrated how different numbers of harmonics approximate artificial periodic functions. The present post applies the results to the analysis of instrument sounds, namely sounds of a saxophone.

When playing a stationary tone on a saxophone, we hear a constant sound. Hence, we can assume its waveform is periodic, since we could start to listen to the tone at any time and would still hear the same tone. So, the waveform needs to repeat itself over and over again. In this case, it should be possible to expand the waveform into sines and cosines of harmonic frequencies and reconstruct the original signal from them.

We want to verify this assumption in this post. Let us start with two functions: fourierSeries, which calculates the Fourier series coefficients, and reconstruct, which rebuilds a signal from those coefficients.

def fourierSeries(period, N):
    """Calculate the Fourier series coefficients up to the Nth harmonic"""
    result = []
    ...
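The listing above is truncated. As a self-contained illustration, here is one way such a pair of functions could look; this is my own stdlib-only reconstruction with a guessed signature (period is taken to be a list of samples covering one period), not the post's actual code:

```python
import cmath
import math

def fourierSeries(period, N):
    """(a_k, b_k) Fourier coefficients up to the Nth harmonic,
    from uniform samples of one period of a real signal."""
    M = len(period)
    coeffs = []
    for k in range(N + 1):
        # complex coefficient c_k via a direct DFT-style sum
        c = sum(period[n] * cmath.exp(-2j * math.pi * k * n / M)
                for n in range(M)) / M
        a_k = c.real if k == 0 else 2 * c.real  # the DC term is not doubled
        b_k = -2 * c.imag
        coeffs.append((a_k, b_k))
    return coeffs

def reconstruct(coeffs, M):
    """Rebuild M samples of one period from the (a_k, b_k) pairs."""
    out = []
    for n in range(M):
        t = 2 * math.pi * n / M
        out.append(sum(a * math.cos(k * t) + b * math.sin(k * t)
                       for k, (a, b) in enumerate(coeffs)))
    return out
```

Feeding one period of a pure cosine through fourierSeries yields a_1 ≈ 1 with all other coefficients near zero, and reconstruct recovers the samples.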
Categories: FLOSS Project Planets

New Emoji and… busy time!!

Planet KDE - Tue, 2017-03-28 16:56

WARNING SHORT POST

Really busy time….
I really miss the time I spent on Inkscape. I hope I can find some time to go back to drawing.

p.s. do you like this emoji?

Cry Emoji


Categories: FLOSS Project Planets

Sylvain Beucler: Practical basics of reproducible builds 2

Planet Debian - Tue, 2017-03-28 15:46

Let's review what we learned so far:

  • the compiler version needs to be identical and recorded
  • build options and their order need to be identical and recorded
  • the build path needs to be identical and recorded
    (otherwise debug symbols - and BuildIDs - change)
  • diffoscope helps check for differences in build output

We stopped when compiling a PE .exe produced varying output.
It turns out that PE carries a build date timestamp.

The spec says that bound DLLs' timestamps are referred to in the "Delay-Load Directory Table". Maybe that's also the date Windows displays when a system-wide DLL is about to be replaced.
The build timestamp looks unused in .exe files, though.
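The timestamp in question is the TimeDateStamp field of the COFF header, four bytes past the PE\0\0 signature, whose offset is in turn stored at 0x3c of the DOS header. A small Python sketch to read it (my illustration, not part of the toolchain discussed here):

```python
import struct

def pe_timestamp(data: bytes) -> int:
    """Return the COFF TimeDateStamp from the raw bytes of a PE image."""
    assert data[:2] == b"MZ", "not a DOS/PE executable"
    # e_lfanew at offset 0x3c points at the "PE\0\0" signature
    (pe_off,) = struct.unpack_from("<I", data, 0x3C)
    assert data[pe_off:pe_off + 4] == b"PE\0\0", "missing PE signature"
    # COFF header: Machine (2 bytes), NumberOfSections (2), TimeDateStamp (4)
    (stamp,) = struct.unpack_from("<I", data, pe_off + 8)
    return stamp
```

A reproducible build wants this field pinned (to SOURCE_DATE_EPOCH, or zeroed) so two builds of the same sources return the same value.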

Anyway, Stephen Kitt pointed out (thanks!) that Debian's MinGW linker binutils-mingw-w64 has an upstream-pending patch that sets the timestamp to SOURCE_DATE_EPOCH if set.

Alternatively, one can pass -Wl,--no-insert-timestamp to set it to 0 (though see caveats below):

$ i686-w64-mingw32.static-gcc -Wl,--no-insert-timestamp hello.c -o hello.exe
$ md5sum hello.exe
298f98d74e6e913628a8b74514eddcb2  hello.exe
$ /opt/mxe/usr/bin/i686-w64-mingw32.static-gcc -Wl,--no-insert-timestamp hello.c -o hello.exe
$ md5sum hello.exe
298f98d74e6e913628a8b74514eddcb2  hello.exe

If we don't care about debug symbols, unlike with ELF, stripped PE binaries look stable too!

$ cd repro/
$ i686-w64-mingw32.static-gcc hello.c -o hello.exe && i686-w64-mingw32.static-strip hello.exe
$ md5sum hello.exe
6e07736bf8a59e5397c16e799699168d  hello.exe
$ i686-w64-mingw32.static-gcc hello.c -o hello.exe && i686-w64-mingw32.static-strip hello.exe
$ md5sum hello.exe
6e07736bf8a59e5397c16e799699168d  hello.exe
$ cd ..
$ cp -a repro repro2/
$ cd repro2/
$ i686-w64-mingw32.static-gcc hello.c -o hello.exe && i686-w64-mingw32.static-strip hello.exe
$ md5sum hello.exe
6e07736bf8a59e5397c16e799699168d  hello.exe

Now that we have the main executable covered, what about the dependencies?
Let's see how well MXE compiles SDL2:

$ cd /opt/mxe/
$ cp -a ./usr/i686-w64-mingw32.static/lib/libSDL2.a /tmp
$ rm -rf * && git checkout .
$ make sdl2
$ md5sum ./usr/i686-w64-mingw32.static/lib/libSDL2.a /tmp/libSDL2.a
68909ab13181b1283bd1970a56d41482  ./usr/i686-w64-mingw32.static/lib/libSDL2.a
68909ab13181b1283bd1970a56d41482  /tmp/libSDL2.a

Neat - what about another build directory?

$ cd /usr/srx/mxe
$ make sdl2
$ md5sum usr/i686-w64-mingw32.static/lib/libSDL2.a /tmp/libSDL2.a
c6c368323927e2ae7adab7ee2a7223e9  usr/i686-w64-mingw32.static/lib/libSDL2.a
68909ab13181b1283bd1970a56d41482  /tmp/libSDL2.a
$ ls -l ./usr/i686-w64-mingw32.static/lib/libSDL2.a /tmp/libSDL2.a
-rw-r--r-- 1 me me 5861536 mars 23 21:04 /tmp/libSDL2.a
-rw-r--r-- 1 me me 5862488 mars 25 19:46 ./usr/i686-w64-mingw32.static/lib/libSDL2.a

Well that was expected.
But what about the filesystem order?
With such an automated build, could potential variations in the order of files go undetected?
Would the output be different on another filesystem format (ext4 vs. btrfs...)?

It was a good opportunity to test the disorderfs fuse-based tool.
And while I'm at it, check if reprotest is easy enough to use (the manpage is scary).
Let's redo our basic tests with it - basic usage is actually very simple:

$ apt-get install reprotest disorderfs faketime
$ reprotest 'make hello' 'hello'
...
will vary: environment
will vary: fileordering
will vary: home
will vary: kernel
will vary: locales
will vary: exec_path
will vary: time
will vary: timezone
will vary: umask
...
--- /tmp/tmpk5uipdle/control_artifact/
+++ /tmp/tmpk5uipdle/experiment_artifact/
│   --- /tmp/tmpk5uipdle/control_artifact/hello
├── +++ /tmp/tmpk5uipdle/experiment_artifact/hello
│ ├── stat {}
│ │ @@ -1,8 +1,8 @@
│ │    Size: 8632       Blocks: 24         IO Block: 4096   regular file
│ │  Links: 1
│ │ -Access: (0755/-rwxr-xr-x)  Uid: ( 1000/      me)   Gid: ( 1000/      me)
│ │ +Access: (0775/-rwxrwxr-x)  Uid: ( 1000/      me)   Gid: ( 1000/      me)
│ │  Modify: 1970-01-01 00:00:00.000000000 +0000
│ │   Birth: -
# => OK except for permissions

$ reprotest 'make hello && chmod 755 hello' 'hello'
=======================
Reproduction successful
=======================
No differences in hello
c8f63b73265e69ab3b9d44dcee0ef1d2815cdf71df3c59635a2770e21cf462ec  hello

$ reprotest 'make hello CFLAGS="-g -O2"' 'hello'
# => lots of differences, as expected

Now let's apply to the MXE build.
We keep the same build path, and also avoid using linux32 (because MXE would then recompile all the host compiler tools for 32-bit):

$ reprotest --dont-vary build_path,kernel 'touch src/sdl2.mk && make sdl2 && cp -a usr/i686-w64-mingw32.static/lib/libSDL2.a .' 'libSDL2.a'
=======================
Reproduction successful
=======================
No differences in libSDL2.a
d9a39785fbeee5a3ac278be489ac7bf3b99b5f1f7f3e27ebf3f8c60fe25086b5  libSDL2.a

That checks!
What about a full MXE environment?

$ reprotest --dont-vary build_path,kernel 'make clean && make sdl2 sdl2_gfx sdl2_image sdl2_mixer sdl2_ttf libzip gettext nsis' 'usr'
# => changes in installation dates
# => timestamps in .exe files (dbus, ...)
# => libicu doesn't look reproducible (derb.exe, genbrk.exe, genccode.exe...)
# => apparently ar timestamp variations in libaclui

Most libraries look reproducible enough.
ar differences may go away at FreeDink link time since I'm aiming at a static build. Let's try!

First let's see how FreeDink behaves with stable dependencies.
We can compile with -Wl,--no-insert-timestamp and strip the binaries in a first step.
There are various issues (timestamps, permissions) but first let's check the executables themselves:

$ cd freedink/
$ reprotest --dont-vary build_path 'mkdir cross-woe-32/ && cd cross-woe-32/ && export PATH=/opt/mxe/usr/bin:$PATH && LDFLAGS="-Wl,--no-insert-timestamp" ../configure --host=i686-w64-mingw32.static --enable-static && make -j$(nproc) && make install-strip DESTDIR=$(pwd)/destdir' 'cross-woe-32/destdir/usr/local/bin'
# => executables are identical!

# Same again, just to make sure
$ reprotest --dont-vary build_path 'mkdir cross-woe-32/ && cd cross-woe-32/ && export PATH=/opt/mxe/usr/bin:$PATH && LDFLAGS="-Wl,--no-insert-timestamp" ../configure --host=i686-w64-mingw32.static --enable-static && make -j$(nproc) && make install-strip DESTDIR=$(pwd)/destdir' 'cross-woe-32/destdir/usr/local/bin'
│ --- /tmp/tmp2yw0sn4_/control_artifact/bin/freedink.exe
├── +++ /tmp/tmp2yw0sn4_/experiment_artifact/bin/freedink.exe
│ │ @@ -2,20 +2,20 @@
│ │  00000010: b800 0000 0000 0000 4000 0000 0000 0000  ........@.......
│ │  00000020: 0000 0000 0000 0000 0000 0000 0000 0000  ................
│ │  00000030: 0000 0000 0000 0000 0000 0000 8000 0000  ................
│ │  00000040: 0e1f ba0e 00b4 09cd 21b8 014c cd21 5468  ........!..L.!Th
│ │  00000050: 6973 2070 726f 6772 616d 2063 616e 6e6f  is program canno
│ │  00000060: 7420 6265 2072 756e 2069 6e20 444f 5320  t be run in DOS
│ │  00000070: 6d6f 6465 2e0d 0d0a 2400 0000 0000 0000  mode....$.......
│ │ -00000080: 5045 0000 4c01 0a00 e534 0735 0000 0000  PE..L....4.5....
│ │ +00000080: 5045 0000 4c01 0a00 0000 0000 0000 0000  PE..L...........
│ │  00000090: 0000 0000 e000 0e03 0b01 0219 00f2 3400  ..............4.
│ │  000000a0: 0022 4e00 0050 3b00 c014 0000 0010 0000  ."N..P;.........
│ │  000000b0: 0010 3500 0000 4000 0010 0000 0002 0000  ..5...@.........
│ │  000000c0: 0400 0000 0100 0000 0400 0000 0000 0000  ................
│ │ -000000d0: 00e0 8900 0004 0000 7662 4e00 0200 0000  ........vbN.....
│ │ +000000d0: 00e0 8900 0004 0000 89f8 4e00 0200 0000  ..........N.....
│ │  000000e0: 0000 2000 0010 0000 0000 1000 0010 0000  .. .............
│ │  000000f0: 0000 0000 1000 0000 00a0 8700 b552 0000  .............R..
│ │  00000100: 0000 8800 d02d 0000 0050 8800 5006 0000  .....-...P..P...
│ │  00000110: 0000 0000 0000 0000 0000 0000 0000 0000  ................
│ │  00000120: 0060 8800 4477 0100 0000 0000 0000 0000  .`..Dw..........
│ │  00000130: 0000 0000 0000 0000 0000 0000 0000 0000  ................
│ │  00000140: 0440 8800 1800 0000 0000 0000 0000 0000  .@..............
├── stat {}
│ │ │ @@ -1,8 +1,8 @@
│ │ │
│ │ │    Size: 5121536    Blocks: 10008      IO Block: 4096   regular file
│ │ │  Links: 1
│ │ │  Access: (0755/-rwxr-xr-x)  Uid: ( 1000/      me)   Gid: ( 1000/      me)
│ │ │
│ │ │ -Modify: 2017-03-26 01:26:35.233841833 +0000
│ │ │ +Modify: 2017-03-26 01:27:01.829592505 +0000
│ │ │
│ │ │  Birth: -

Gah...
AFAIU there is something random in the linking phase, and sometimes the timestamp is removed, sometimes it's not.
Not very easy to track but I believe I reproduced it with the "hello" example:

# With MXE:
$ reprotest 'i686-w64-mingw32.static-gcc hello.c -I /opt/mxe/usr/i686-w64-mingw32.static/include -I/opt/mxe/usr/i686-w64-mingw32.static/include/SDL2 -L/opt/mxe/usr/i686-w64-mingw32.static/lib -lmingw32 -Dmain=SDL_main -lSDL2main -lSDL2 -lSDL2main -Wl,--no-insert-timestamp -luser32 -lgdi32 -lwinmm -limm32 -lole32 -loleaut32 -lshell32 -lversion -o hello && chmod 700 hello' 'hello'
# => different
# => maybe because it imports the build timestamp from -lSDL2main

# With Debian's MinGW (but without SOURCE_DATE_EPOCH):
$ reprotest 'i686-w64-mingw32-gcc hello.c -I /opt/mxe/usr/i686-w64-mingw32.static/include -I/opt/mxe/usr/i686-w64-mingw32.static/include/SDL2 -L/opt/mxe/usr/i686-w64-mingw32.static/lib -lmingw32 -Dmain=SDL_main -lSDL2main -lSDL2 -lSDL2main -Wl,--no-insert-timestamp -luser32 -lgdi32 -lwinmm -limm32 -lole32 -loleaut32 -lshell32 -lversion -o hello && chmod 700 hello' 'hello'
=======================
Reproduction successful
=======================
No differences in hello
0b2d99dc51e2ad68ad040d90405ed953a006c6e58599beb304f0c2164c7b83a2  hello

# Let's remove -Dmain=SDL_main and let our main() have precedence over the one in -lSDL2main:
$ reprotest 'i686-w64-mingw32.static-gcc hello.c -I /opt/mxe/usr/i686-w64-mingw32.static/include -I/opt/mxe/usr/i686-w64-mingw32.static/include/SDL2 -L/opt/mxe/usr/i686-w64-mingw32.static/lib -lmingw32 -lSDL2main -lSDL2 -lSDL2main -Wl,--no-insert-timestamp -luser32 -lgdi32 -lwinmm -limm32 -lole32 -loleaut32 -lshell32 -lversion -o hello && chmod 700 hello' 'hello'
=======================
Reproduction successful
=======================
No differences in hello
6c05f75eec1904d58be222cc83055d078b4c3be8b7f185c7d3a08b9a83a2ef8d  hello

$ LANG=C i686-w64-mingw32.static-ld --version  # MXE
GNU ld (GNU Binutils) 2.25.1
Copyright (C) 2014 Free Software Foundation, Inc.

$ LANG=C i686-w64-mingw32-ld --version  # Debian
GNU ld (GNU Binutils) 2.27.90.20161231
Copyright (C) 2016 Free Software Foundation, Inc.

It looks like there is a random behavior in binutils 2.25, coupled with SDL2's wrapping of my main().

So FreeDink is nearly reproducible, except for this build timestamp issue that pops up in all kinds of situations. In the worst case I can zero it out, or patch MXE's binutils until they upgrade.
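
Zeroing it out is mechanical: in the PE format, the COFF TimeDateStamp sits 8 bytes after the "PE\0\0" signature, which is exactly the field that changes at offset 0x88 in the diffoscope hexdump above. Here is a minimal sketch (the helper and the synthetic file are illustrative, not part of any toolchain; a library like pefile would do this more robustly):

```python
import struct

def zero_pe_timestamp(path):
    """Zero the 4-byte COFF TimeDateStamp of a PE file in place."""
    with open(path, "r+b") as f:
        header = f.read(0x40)
        pe_offset = struct.unpack_from("<I", header, 0x3C)[0]  # e_lfanew
        f.seek(pe_offset)
        assert f.read(4) == b"PE\0\0", "not a PE file"
        # COFF header: Machine (2 bytes) + NumberOfSections (2 bytes),
        # then TimeDateStamp (4 bytes)
        f.seek(pe_offset + 8)
        f.write(b"\0\0\0\0")

# Demo on a synthetic header mimicking the bytes from the diff above:
fake = bytearray(0x100)
fake[0x3C:0x40] = struct.pack("<I", 0x80)    # e_lfanew -> PE header at 0x80
fake[0x80:0x84] = b"PE\0\0"
fake[0x84:0x88] = bytes.fromhex("4c010a00")  # Machine + NumberOfSections
fake[0x88:0x8C] = bytes.fromhex("e5340735")  # the varying TimeDateStamp
with open("fake.exe", "wb") as f:
    f.write(fake)
zero_pe_timestamp("fake.exe")
with open("fake.exe", "rb") as f:
    print(f.read()[0x88:0x8C])  # b'\x00\x00\x00\x00'
```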

More importantly, what if I recompile FreeDink and the dependencies twice?

$ (cd /opt/mxe/ && make clean && make sdl2 sdl2_gfx sdl2_image sdl2_mixer sdl2_ttf glm libzip gettext nsis)
$ (mkdir cross-woe-32/ && cd cross-woe-32/ \
   && export PATH=/opt/mxe/usr/bin:$PATH \
   && LDFLAGS="-Wl,--no-insert-timestamp" ../configure --host=i686-w64-mingw32.static --enable-static \
   && make V=1 -j$(nproc) \
   && make install-strip DESTDIR=$(pwd)/destdir)
$ mv cross-woe-32/ cross-woe-32-1/
# Same again...
$ mv cross-woe-32/ cross-woe-32-2/
$ diff -ru cross-woe-32-1/destdir/ cross-woe-32-2/destdir/
[nothing]

Yay!
I could not reproduce the build timestamp issue in the stripped binaries, though it was still varying in the unstripped src/freedinkedit.exe.

I mentioned there were other changes noticed by diffoscope.

  • Changes in file timestamps.

That one is interesting.
Could be ignored, but we want to generate an identical binary package/archive too, right?
That's where archive meta-data matters.
make INSTALL="$(which install) -p" could help for static files, but not generated ones.
The doc suggests clamping all files to SOURCE_DATE_EPOCH - i.e. all generated files will have their date set at that timestamp:

$ export SOURCE_DATE_EPOCH=$(date +%s) \
  && reprotest --dont-vary build_path \
     'make ... && find destdir/ -newermt "@${SOURCE_DATE_EPOCH}" -print0 | xargs -0r touch --no-dereference --date="@${SOURCE_DATE_EPOCH}"' \
     'cross-woe-32/destdir/'
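
The clamp can also be written out explicitly; here is a Python sketch of what that find/touch pipeline does (the paths and the epoch value are illustrative):

```python
import os

def clamp_mtimes(tree, epoch):
    """Clamp every mtime newer than epoch down to epoch, mirroring the
    find -newermt | xargs touch pipeline."""
    paths = [tree]
    for root, dirs, files in os.walk(tree):
        paths += [os.path.join(root, n) for n in dirs + files]
    for p in paths:
        if os.lstat(p).st_mtime > epoch:
            os.utime(p, (epoch, epoch), follow_symlinks=False)

# Illustrative run (the constant stands in for SOURCE_DATE_EPOCH):
os.makedirs("destdir/bin", exist_ok=True)
open("destdir/bin/freedink.exe", "w").close()  # a freshly generated file
clamp_mtimes("destdir", 1490659200)
print(int(os.lstat("destdir/bin/freedink.exe").st_mtime))  # 1490659200
```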
  • Changes in directory permissions

Caused by varying umask.
I attempted to mitigate the issue by playing with make install MKDIR_P="mkdir -p -m 755".
However even mkdir -p -m ... does not set permissions for intermediate directories.
Maybe it's better to set and record the umask...
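
The leak is easy to demonstrate: the mode passed to mkdir is filtered through the process umask, and parent directories never see the mode argument at all. An illustration in Python (not from the original post):

```python
import os, stat, tempfile

os.umask(0o027)  # the kind of variation reprotest applies
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "a/b/c"), mode=0o755)
modes = {d: stat.S_IMODE(os.stat(os.path.join(base, d)).st_mode)
         for d in ("a", "a/b", "a/b/c")}
print({d: oct(m) for d, m in modes.items()})
# every level comes out 0o750: the requested 0o755 was masked by the
# umask, and the intermediate directories ignored the mode entirely
```

Setting the umask to a known value (and recording it) before the build makes the resulting modes deterministic.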

So, aside from minor issues such as BuildIDs and build timestamps, the toolchain is pretty stable as of now.
The issue is more about fixing and recording the build environment.
Which is probably the next challenge.

Categories: FLOSS Project Planets

LibrePlanet Day 2, DRM, contributing, and advice

FSF Blogs - Tue, 2017-03-28 14:13

Doctorow presented "Beyond unfree: The software you can go to jail for talking about." Related to his current anti-Digital Restrictions Management (DRM) work, he addressed the wide range of risks threatened by copyright, trademark, and patent laws, as well as the use and institutionalization of DRM. But he did not just paint a bleak image, instead reminding the audience that the fight against DRM and similar restrictions is ongoing. "My software freedom," Doctorow said, "is intersectional."

The day also saw LibrePlanet's first birds of a feather (BoF) sessions. BoFs are self-organized sessions that gather people around a shared interest. Sessions this year included:

  • Liberating the education system
  • Free and open source geospatial technology
  • Peer-to-peer crypto-social networking
  • Collective action for political change
  • A look at Snowdrift.coop
  • and a cryptoparty

Over the course of the weekend, there were two raffle drawings and door prizes courtesy of free software/open hardware companies including Aleph Objects, Technoethical, and ThinkPenguin, as well as DRM-free publisher No Starch Press and local brewery Aeronaut.

The conference closed with Sumana Harihareswara's discussion of things she wishes she had known in 1998, when she first got involved in free software. Drawing inspiration from the work of the theater company the Neo-Futurists, she invited the audience to help her choose from a list of 35 topics by calling out by number the item they wanted to hear about next--until a timer set for 35 minutes ran out. Her topics ranged from technical to personal to the importance of welcoming communities, and she closed by discussing the value of harm reduction in free software. Video of her talk is available now.

Nearly 400 people participated in LibrePlanet 2017, which was powered by 41 amazing volunteers, who did everything from hanging signs, stacking chairs, and sweeping floors to introducing speakers, fielding questions, and running the video streaming system.

Between Saturday and Sunday, there were more than fifty speakers, and almost as many sessions. Some videos of this year's talks are available now and the rest will be added in the next few days.

Categories: FLOSS Project Planets

Joachim Breitner: Birthday greetings communication behaviour

Planet Debian - Tue, 2017-03-28 13:42

Randall Munroe recently mapped how he communicated with his social circle. As I turned a year older recently, I had an opportunity to create a similar statistic showing how people close to me chose to fulfil their social obligations:

Communication variants

(Diagram created with the xkcd-font and using these two stackoverflow answers.)

In related news: Heating 3½ US cups of water to a boil takes 7 minutes and 40 seconds on one particular gas stove, but only 3 minutes and 50 seconds with an electric kettle, despite the 110V-induced limitation to 1.5kW.
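
(The electric-kettle figure checks out: heating ~828 g of water by roughly 80 °C takes about 277 kJ, which at a full 1.5 kW is just over three minutes, so the measured 3:50 corresponds to roughly 80% of the power reaching the water. A quick back-of-envelope check, with standard water properties assumed:)

```python
cup_ml = 236.588              # one US cup in millilitres
mass_g = 3.5 * cup_ml         # ~828 g of water, density ~1 g/mL
joules = mass_g * 4.186 * 80  # specific heat x ~80 degC temperature rise
seconds = joules / 1500       # at the full 1.5 kW
print(round(seconds))         # 185, i.e. just over 3 minutes
```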

Categories: FLOSS Project Planets

James Oakley: hook_uninstall not running? Drupal schoolboy errors #1

Planet Drupal - Tue, 2017-03-28 12:04

I'll put this here, in case it helps anyone else.

I'm owning up to Drupal Schoolboy Error #1.

I was writing a very simple module. It did so little that I wanted to keep things as simple as possible — just a .info file and a .module file.

Blog Category: Drupal Planet
Categories: FLOSS Project Planets

myDropWizard.com: Most common Drupal site building pitfalls and how to avoid them! (Part 3 of 3)

Planet Drupal - Tue, 2017-03-28 10:58

This is the third in a series of articles, in which I'd like to share the most common pitfalls we've seen, so that you can avoid making the same mistakes when building your sites!

myDropWizard offers support and maintenance for Drupal sites that we didn't build initially. We've learned the hard way which site building mistakes have the greatest potential for creating issues later.

And we've seen a lot of sites! Besides our clients, we also do a FREE in-depth site audit as the first step when talking to a potential client, so we've seen loads of additional sites that didn't become customers.

In the first article, we looked at security updates, badly installed module code and challenges with "patching" modules and themes, as well as specific strategies for addressing each of those problems. In the second article, we looked at how to do the most common Drupal customizations without patching.

In this article, we're going to look at some common misconfigurations that make a site less secure, and how to avoid them!

NOTE: even though they might take a slightly different form depending on the version, most of these same pitfalls apply equally to Drupal 6, 7 and 8! It turns out that bad practices are quite compatible with multiple Drupal versions ;-)

Categories: FLOSS Project Planets

Ishiiruka on openSUSE

Planet KDE - Tue, 2017-03-28 10:12

Quick update: Ishiiruka (fork of the Wii/GameCube emulator Dolphin for those who missed my last post) is now available on openSUSE Leap as well.

In addition to that, all openSUSE builds now build against shared wxWidgets 3.1 instead of statically linking the included one.

Get it from https://software.opensuse.org/package/ishiiruka-dolphin-unstable.


Categories: FLOSS Project Planets

MidCamp - Midwest Drupal Camp: Free Community Drupal Training at MidCamp 2017

Planet Drupal - Tue, 2017-03-28 09:38

It started as a question. Why do Drupal camps only have trainings on a separate day? The legitimate answer was that trainings and sessions should not compete for the same normal Drupal camp audience. But what if the trainings were not for that normal Drupal camp audience?

When all of the hard work was done by the MidCamp venue team to secure the DePaul University Student Center for MidCamp 2017, we immediately started exploring the idea of having half-day Drupal training sessions for those that would normally not attend a Drupal camp.  We reserved two extra rooms for this idea.

While I looked for parties who would be interested in attending the trainings, Joseph Purcell of Digital Bridge Solutions worked to organize an Introduction to Making Websites with Drupal itinerary and found great trainers to lead the event: Michael Chase, Instructor at DePaul University College of Computing and Digital Media, and Aaron Meeuwsen, Web Developer at HS2 Solutions, along with several other volunteers: Scott Weston and Matt Ramir from HS2, and Doug Dobrzynski from PMMI Media Group.  Without them, we couldn't have made this happen. I would also like to thank my employer, Xeno Media, for contributing some of my time to help organize these trainings.

We've reached out to, and are happy to be working with, some great local Chicago technology groups. Girl Develop It, which provides programs for adult women interested in learning web and software development in a judgment-free environment; the IT Knowledge and Abilities Network (ITKAN), a professional networking and growth organization with a focus on professionals and aspiring professionals with disabilities; and Women Who Code, a global non-profit dedicated to inspiring women to excel in technology careers, all answered our call.

But we still have room for more.  Do you know of someone who would benefit from free Intro to Drupal Training?  We want you to invite them!

The trainings will cover basic CMS tasks like editing and publishing content, creating navigation menus, and placement of content on the site, and will progress to more complex tasks such as module installation and site configuration. Additionally, we'll show the various ways the Drupal community can help through the issue queue, meetups, job boards, and mentorship.

The sessions are:

  • Friday March 31st - 9:00 am - 12:00 pm
  • Friday March 31st - 1:00 pm - 4:00 pm
  • Saturday April 1st - 9:00 am - 12:00 pm
  • Saturday April 1st - 1:00 pm - 4:00 pm

Get your free ticket today!

Don't be shy, please share and invite anyone you think would benefit from these trainings.  Ticket enrollment is open, and we would love to have every seat filled!

Categories: FLOSS Project Planets

Chromatic: Configuring Redis Caching with Drupal 8

Planet Drupal - Tue, 2017-03-28 09:20

How to install and configure Redis caching for Drupal 8.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: nanotime 0.1.2

Planet Debian - Tue, 2017-03-28 07:32

A new minor version of the nanotime package for working with nanosecond timestamps arrived yesterday on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic.

This release just arranges things neatly before Leonardo Silvestri and I may shake things up with a possible shift to doing it all in S4 as we may need the added rigour for nanotime object operations for use in his ztsdb project.

Changes in version 0.1.2 (2017-03-27)
  • The as.integer64 function is now exported as well.

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Django Weekly: Django Weekly 31 - DartCMS, Bot, Celery and more @thedjangoweekly

Planet Python - Tue, 2017-03-28 06:40
Worthy Read
Django 1.11 release candidate 1 released
Release notes are here https://docs.djangoproject.com/en/dev/releases/1.11/ .
release
Cloud Hosted Databases
9 DBs to choose from, 5 min setup, auto-scaling, Cloud hosted. Free for 30 Days.
sponsor
DartCMS
DartCMS is an opensource content management system based on the popular Django Framework.
cms
Creating a Simple Bot Server Using Python, Django and Django-channels
Boilerplate code for me (and hopefully others) to get started with using Django to create a bot server.
bot
Django project layout and settings
Explains the project layout structure and settings.
core-django
Serializing Things for Celery
celery
How to dynamically filter ModelChoice's queryset in a ModelForm?
orm, forms
Class-Based Views vs. Function-Based Views
If you follow my content here in the blog, you probably have already noticed that I’m a big fan of function-based views. Quite often I use them in my examples. I get asked a lot why I don’t use class-based views more frequently. So I thought about sharing my thoughts about that subject matter.
views

Projects
Django, an app at a time - 224 Stars, 34 Forks
A heavily commented Django project dedicated to teaching the framework or refreshing one's memory.
Categories: FLOSS Project Planets

Darren Mothersele: How to do Everything with PHP Middleware (DrupalCamp London)

Planet Drupal - Tue, 2017-03-28 05:30

At the DrupalCamp in London earlier this month I gave a talk about PHP Middleware. You can see a recording of the talk on YouTube. Here’s a summary, in case you don’t want to watch the whole talk, or the distorted audio upsets you, or if you want the links and references:

Simple vs Easy

I started with a reference to the important talk by Rich Hickey, Simple Made Easy. This is high up on my list of videos every software developer needs to watch. I began here because I think it’s important to identify the difference between simple and easy, to identify where complexity sneaks into our systems. I have found PHP Middleware to be an important tool in the fight against complexity.

“programming, when stripped of all its circumstantial irrelevancies, boils down to no more and no less than very effective thinking so as to avoid unmastered complexity, to very vigorous separation of your many different concerns.”

Edsger W. Dijkstra (1930 - 2002)

De-complecting PHP

I talked a bit about different ways to simplify development with PHP. Including: Domain-driven design, Hexagonal architecture (Ports and Adapters), Framework-independent code, Thin APIs, etc… In particular, I wanted to emphasise the importance of framework-independent code and the benefit of using common interfaces such as the ones developed as PSRs by PHP-FIG.

There was some discussion afterwards about introducing unnecessary abstractions, but I think this misses the point. Of course there is a trade off, but the key is to focus on the simplicity, on untwisting things (c.f. Rich Hickey).

De-coupled

Inspired by the Zend Expressive installation procedure, I imagined what Drupal 10 might look like, with fully-decoupled components.

Interfaces

The widespread adoption of PSR7 by the PHP community has led to the popularity of PHP Middleware-based systems.

Why PSR7 when Symfony HTTP components were so popular? Well, that is an implementation - and rather than standardise on implementation, we should standardise against interfaces.

This allows more interoperability. I showed this pseudocode:

// Take the incoming request from Diactoros
$request = ServerRequestFactory::fromGlobals();
$client = new Client();
// Response comes back from Guzzle
$response = $client->send($request->withUrl($dest));
$body = simplexml_load_string(
    $response->getBody()->getContents());
// pass back to Diactoros
(new SapiEmitter)->emit($response->withBody($body));

The example uses HTTP requests from Zend Diactoros, forwards them using the Guzzle HTTP client, and returns the response object from Guzzle using the SAPI Emitter from Diactoros.

This demonstrates the power of sharing standard interfaces. Here two packages are used together, both provide an implementation of PSR7 HTTP messages, and they work seamlessly because they both conform to the same interface, despite the differing implementation details.

Decorating Web Apps

This is what a typical web app looks like:

Which can be simplified to this:

A web app takes a request and returns a response.

The concept behind PHP Middleware is that you can decorate the app, to add new functionality, by intercepting the request on the way in, and the response on the way out. This avoids the complexity of intertwining your code throughout the ball of mud.
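
The decoration idea is not PHP-specific. Here is a minimal sketch in Python, with plain dicts standing in for PSR-7 request/response objects (all names are made up for illustration):

```python
# The innermost "app": takes a request, returns a response.
def app(request):
    return {"status": 200, "body": "hello " + request["path"]}

def with_header(next_handler):
    """Decorate the response on the way out."""
    def handle(request):
        response = next_handler(request)
        response.setdefault("headers", {})["X-Frame-Options"] = "DENY"
        return response
    return handle

def with_auth(next_handler):
    """Intercept the request on the way in."""
    def handle(request):
        if "token" not in request:
            return {"status": 401, "body": "unauthorized"}
        return next_handler(request)
    return handle

# Wrap the app in layers; requests pass inward, responses pass back out.
handler = with_auth(with_header(app))
print(handler({"path": "/", "token": "secret"})["status"])  # 200
print(handler({"path": "/"})["status"])                     # 401
```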

Here’s an example (pseudocode) for adding CORS functionality to an existing app:

$cors = analyze($request);
switch ($cors->getRequestType()) {
    case ERR_NO_HOST_HEADER:
    case ERR_ORIGIN_NOT_ALLOWED:
    case ERR_METHOD_NOT_SUPPORTED:
    case ERR_HEADERS_NOT_SUPPORTED:
        return createResponse(403);
    case TYPE_REQUEST_OUT_OF_CORS_SCOPE:
        return $app->process($request);
    case TYPE_PRE_FLIGHT_REQUEST:
        $response = Utils\Factory::createResponse(200);
        return $response->withHeaders($cors->getHeaders());
    default:
        $response = $app->process($request);
        return $response->withHeaders($cors->getHeaders());
}

StackPHP first popularised the concept of middleware in PHP. This diagram is from their website:

There are other popular micro-frameworks based on this concept, such as Slim.

The core of your app is just a thin layer of business logic. Just your domain specific code. The rest can be wrapped in layers which isolate and separate concerns nicely.

Single-pass vs Double-pass

The double pass approach became the most popularly used signature for HTTP middleware, based on Express middleware from the JS community.

It looks like this:

// DOUBLE PASS
function __invoke($request, $response, $next) { }

The request and the response are both passed into the middleware, along with a $next delegate that is called to pass control and carry on processing down the chain of middleware.

This double-pass approach is much newer than the StackPHP-style kernels, and was used by most of the early adopters of PSR-7.

A single pass approach, looks like this:

// SINGLE PASS / LAMBDA
function process($request, $delegate) { }

The issue is with how the response object is dealt with. In the double-pass approach, both are provided. The argument is that this is better for dependency inversion. Using the single-pass approach you either need to hard-code a dependency on an HTTP message implementation into your middleware when a response is required, or you need to inject a factory for generating the response.

PSR-15 HTTP Middleware

After the success of PSR7, with its wide adoption leading to much standardisation and interoperability in PHP frameworks, the next step is to standardise the middleware interface.

This is not yet an accepted PSR. At the time of writing it is still in draft status. It is available for use in the http-interop/http-middleware repo.

Invoker

As an aside, I mentioned the Invoker Interface. As per the docs:

“Who doesn’t need an over-engineered call_user_func()?”

In particular this library really simplifies the process of calling things and injecting dependencies. It also allows you to call things using named parameters. I make extensive use of this, and I find making calls with named parameters makes code much easier to understand.
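
The named-parameter idea translates directly into other languages: match the caller-supplied names against the callable's declared parameters. A toy sketch in Python (not the library's actual implementation):

```python
import inspect

def call(func, params):
    """Toy invoker: bind only the named parameters the callable declares."""
    wanted = inspect.signature(func).parameters
    return func(**{name: params[name] for name in wanted if name in params})

def process(request, delegate=None):
    return "processing " + request

# Extra entries in params are simply ignored:
print(call(process, {"request": "/home", "irrelevant": 42}))  # processing /home
```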

PSR-15 Interfaces

PSR-15 has two interfaces. Both define a method called process. One is the signature that middleware must support, which takes a PSR7 request and a PSR15 delegate. The other interface defines the process method for the delegate. The method on both interfaces is defined as returning a PSR7 response.

So you can compose a chain of middleware, pass in a request and get a response. The request is passed down the chain of middleware until a response is generated which is then passed back up the chain, possibly being decorated along the way.

For want of a better name, I refer to this chain of middleware as a stack, and I have created a simple Stack Runner to handle the processing of a stack of PSR-15 middleware.

class StackRunner implements DelegateInterface
{
    public function __construct(
        array $stack,
        InvokerInterface $invoker,
        ResponseFactoryInterface $responseFactory
    ) {
        ...
    }

    public function process(ServerRequestInterface $request)
    {
        if (!isset($this->stack[$this->current])) {
            return $this->responseFactory->createResponse();
        }
        $middleware = $this->stack[$this->current];
        $this->current++;
        return $this->invoker->call([$middleware, 'process'], [
            'request' => $request,
            'delegate' => $this,
        ]);
    }
}

ADR (Action Domain Responder)

I went on to talk about ADR as being an adaptation of MVC that is more suitable for use in Web Apps. I’ve found this particularly useful when using Domain-Driven Design, or when used to create thin APIs where you have just a thin layer of business logic on top of a data store.

The issue with MVC is that the template is not the view. The “view” of a web app is the HTTP response, and we split this across our layers, as the body of the response is typically generated by the view, with the knowledge of HTTP being encoded into our controllers. We also bundle together various actions into one controller, which means instantiating the whole thing when we want to run one of the actions.

ADR offers an alternative separation of concerns, where the action methods of the controller are their own separate classes (or in my case anything invokable via the InvokerInterface). I use an InputHandler to deal with parsing the input from the HTTP Request, which the Invoker can then use (via the magic of named arguments).

The domain (Model in MVC terminology) is where the business logic lives. This is called domain, rather than model, to suggest use of domain-driven design.

To use ADR with PHP Middleware, add a resolver to the end of the chain of middleware to dispatch the request to the appropriate Action.

Action

I’ve created a reference implementation of an invokable Action.

Demo!

At this point in my talk I planned to give a demo of how you compose ADR with Middleware to create a working API. Unfortunately, I had some tech issues getting my computer linked up to the projector, and I was starting to feel really ill (full of cold). By this time the caffeine was starting to wear off, and I needed the talk to end!

I’ve put the example code up in a GitHub repo.

References
  • Simple Made Easy - talk by Rich Hickey
  • HTTP Middleware and HTTP Factory interfaces.
  • PSR15 Middlewares a set of really useful middlewares that can be used with a PSR15 middleware dispatcher.
  • Stack Runner my reference implementation of a very simple stack runner for executing a chain of PSR15 middleware.
  • Wafer an experimental implementation of the ADR idea to be used along with PSR15 middleware and the stack runner.

Drop me a line with any feedback. Thanks!

Categories: FLOSS Project Planets