FLOSS Project Planets

Bryan Pendleton: Sapiens: a very short review

Planet Apache - Sun, 2017-03-19 12:50

Yuval Noah Harari is the writer of the moment, having taken the world by storm with his Sapiens: A Brief History of Humankind, and having now finished his follow-up, Homo Deus: A Brief History of Tomorrow.

I've now read Sapiens, which is both readable and thought-provoking, no easy accomplishment.

Harari is certainly ambitious. As I read Sapiens, I amused myself by pretending to be a library cataloger, faced with the task of trying to assign appropriate subject categories under which Sapiens should be listed.

The list would surely have to include: history; biology; archaeology; anthropology; economics; cosmology; evolutionary biology; linguistics; political science; ecology; globalism; religious studies; cognitive science; philosophy.

And surely more.

But that's not adequate either, for you'd want to be more precise than just saying "history", rather: world history; cultural history; ancient history; history of language; military history; world exploration; religious history; history of science; literary history; etc.

Oh, you could go on for hours and hours.

So, Sapiens is very much a book written by an intellectual omnivore, which will most likely appeal to omnivorous readers, by which I mean those who don't want to spend their time reading history books that get trapped for many pages on the individual details of precisely what happened on such-and-such a day, but instead feel like it's reasonable to try to cover the 100,000-year history of mankind on earth in, say, 400 pages or so.

It actually works out better than the previous sentence makes it sound, for Harari is a fine writer and he moves things along briskly.

I think that the strongest and most interesting argument that Sapiens makes is a linguistic one, rooted in the power of the concept of abstraction.

Discussing the evolution of language itself, Harari observes that many species of animal have languages and can communicate, typically using their language abilities to communicate information about food, danger, reproduction, and other universal topics. However:

the truly unique feature of our language is not its ability to transmit information about men and lions. Rather, it's the ability to transmit information about things that do not exist at all. As far as we know, only Sapiens can talk about entire kinds of entities that they have never seen, touched or smelled.

Legends, myths, gods and religions appeared for the first time with the Cognitive Revolution. Many animals and human species could previously say, 'Careful! A lion!' Thanks to the Cognitive Revolution, Homo Sapiens acquired the ability to say, 'The lion is the guardian spirit of our tribe.' This ability to speak about fictions is the most unique feature of Sapiens language.

Although, superficially, this seems to be a discussion about telling entertaining stories around the campfire, or fabricating supernatural explanations as the basis for the founding of religions, Harari quickly re-orients this discussion in a much more practical direction:

fiction has enabled us not merely to imagine things, but to do so collectively.


Such myths give Sapiens the unprecedented ability to cooperate flexibly in large numbers [...] with countless numbers of strangers.

It's that "with ... strangers" part that is so important, as Harari proceeds to demonstrate how this ability to discuss hypothetical scenarios with people who aren't part of your immediate circle of family and friends is what gives rise to things like corporate finance, systems of justice, the scientific method, etc. All of these things are built on the ability to have abstractions:

In what sense can we say that Peugeot SA (the company's official name) exists? There are many Peugeot vehicles, but these are obviously not the company. Even if every Peugeot in the world were simultaneously junked and sold for scrap metal, Peugeot SA would not disappear.


Peugeot is a figment of our collective imagination. Lawyers call this a 'legal fiction.' It can't be pointed at; it is not a physical object. But it exists as a legal entity. Just like you or me, it is bound by the laws of the countries in which it operates. It can open a bank account and own property. It pays taxes, and it can be sued and even prosecuted separately from any of the people who own or work for it.

Ostensibly, Sapiens is a history; that is, it is a book about the past, helping us understand what came before, and how it led us to what is now.

But, as is perhaps universally true, Harari is not actually that terribly interested in what happened in the past, often breezily sweeping whole questions aside with a sort of "it's gone; it's forgotten; we have no accurate evidence; we cannot know for sure" superficiality that is startling.

Rather, as Harari reveals near the end of his book, he is principally interested in the future, and it's here where Sapiens takes a rather unexpected turn.

I must admit, I was wholly unprepared when, just pages before the end of Sapiens, Harari suddenly introduces the topic of "Intelligent Design".

However, it turns out that Harari doesn't mean the term in the sense in which it is typically used; he is firmly in the Darwin/Russell camp.

Rather, Harari is fascinated by the idea that scientific methods may have arrived at the point where humans will soon be capable of intelligent design in the future:

After 4 billion years of natural selection, Alba stands at the dawn of a new cosmic era, in which life will be ruled by intelligent design.


Biologists the world over are locked in battle with the intelligent-design movement, which opposes the teaching of Darwinian evolution in schools and claims that biological complexity proves there must be a creator who thought out all biological details in advance. The biologists are right about the past, but the proponents of intelligent design might, ironically, be right about the future.

At the time of writing, the replacement of natural selection by intelligent design could happen in any of three ways: through biological engineering, cyborg engineering (cyborgs are beings who combine organic with non-organic parts) or the engineering of in-organic life.

If Harari painted with a broad brush when discussing the past, his descriptions of our near-term future are equally vague and loosely-grounded, and those final 25 pages of Sapiens are a rather bewildering peek into "what might be."

But, as Yogi Berra pointed out, "it's tough to make predictions, especially about the future," so I can't fault Harari too much for wanting to have a go at what might come next.

I imagine that, eventually, I will read more of Harari's work, as it's clear he has a lot of interesting things to say.

And if you haven't read Sapiens yet, give it a go; you probably won't regret it, as it's quite good.

Categories: FLOSS Project Planets

PyBites: Twitter digest 2017 week 11

Planet Python - Sun, 2017-03-19 10:42

Every weekend we share a curated list of 15 cool things (mostly Python) that we found / tweeted throughout the week.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: Rcpp 0.12.10: Some small fixes

Planet Debian - Sun, 2017-03-19 09:39

The tenth update in the 0.12.* series of Rcpp just made it to the main CRAN repository providing GNU R with by now over 10,000 packages. Windows binaries for Rcpp, as well as updated Debian packages will follow in due course. This 0.12.10 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, the 0.12.8 release in November, and the 0.12.9 release in January --- making it the fourteenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 975 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by sixty-nine packages over the two months since the last release -- or just over a package a day!

The changes in this release are almost exclusively minor bugfixes and enhancements to documentation and features: James "coatless" Balamuta rounded out the API, Iñaki Ucar fixed a bug concerning one-character output, Jeroen Ooms allowed for finalizers on XPtr objects, Nathan Russell corrected handling of lower (upper) triangular matrices, Dan Dillon and I dealt with Intel compiler quirks for his algorithm.h header, and I added a C++17 plugin along with some (overdue!) documentation regarding the various C++ standards that are supported by Rcpp (which is in essence whatever your compiler supports, i.e., C++98, C++11, C++14 all the way to C++17 but always keep in mind what CRAN and different users may deploy).

Changes in Rcpp version 0.12.10 (2017-03-17)
  • Changes in Rcpp API:

    • Added new size attribute aliases for number of rows and columns in DataFrame (James Balamuta in #638 addressing #630).

    • Fixed single-character handling in Rstreambuf (Iñaki Ucar in #649 addressing #647).

    • XPtr gains a parameter finalizeOnExit to enable running the finalizer when R quits (Jeroen Ooms in #656 addressing #655).

  • Changes in Rcpp Sugar:

    • Fixed sugar functions upper_tri() and lower_tri() (Nathan Russell in #642 addressing #641).

    • The algorithm.h file now accommodates the Intel compiler (Dirk in #643 and Dan in #645 addressing issue #640).

  • Changes in Rcpp Attributes:

    • The C++17 standard is supported with a new plugin (used e.g. for g++-6.2).

  • Changes in Rcpp Documentation:

    • An overdue explanation of how C++11, C++14, and C++17 can be used was added to the Rcpp FAQ.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

How to get 100 million stars in KStars

Planet KDE - Sun, 2017-03-19 09:27
The USNO NOMAD star catalog, which contains ~100 million stars, has been available in KStars for many years, but it appears many KStars users do not know how to get the catalog up and running.

The primary problem is its sheer size (1.4 GB); downloads that large tend to fail in KStars' Download New Data tool. So here is a quick guide on how to obtain this catalog.

USNO NOMAD requires the Tycho-2 catalog to be installed first. It is a relatively small download at only 32MB and can be safely installed using the Download New Data tool. Press Ctrl + N to bring up the dialog, or go to Data → Download.

Navigate to Tycho-2 and click Install.

Wait until Tycho-2 is downloaded and installed. Now download the USNO NOMAD Catalog. Please either use a download manager to download the file, or use wget from the console. To use wget, open a console and type:

wget https://files.kde.org/edu/kstars/download.kde.org/kstars/USNO-NOMAD-1e8-1.0.tar.gz

Alternatively, you might want to check out the mirror list first to download the files from a mirror close to you. After downloading the file, extract it and copy USNO-NOMAD-1e8.dat to ~/.local/share/kstars

If you are using the console:

tar -xzf USNO-NOMAD-1e8-1.0.tar.gz
cp USNO-NOMAD-1e8.dat ~/.local/share/kstars

Now restart KStars, and go to Settings → Configure KStars. You'll see the Star Catalogs density slider; move it up and click Apply. The slider controls how many stars KStars draws on the screen; the more stars, the more resources it takes to render them, so adjust it carefully.

And if all goes well, you should have millions of stars in your KStars Sky Map. Enjoy!
Categories: FLOSS Project Planets

Claus Ibsen: Apache Camel first commit was 10 years ago on March 19th

Planet Apache - Sun, 2017-03-19 07:55
Today marks a very special day, as it was exactly 10 years ago that the first commit of Apache Camel was made by its creator, James Strachan.

Added Mon Mar 19 10:54:57 2007 UTC (10 years ago) by jstrachan
Initial checkin of Camel routing library
The project was created as a sub-project of Apache ActiveMQ, and back then GitHub did not exist, so it used good old Subversion.

In the summer of 2007 the first release of Apache Camel was published, on July 2nd, so let's wait until the summer to celebrate its 10th birthday.

Categories: FLOSS Project Planets

S. Lott: Simple CSV Transformations

Planet Python - Sun, 2017-03-19 03:07
Here's an interesting question:

I came across your blog post "Introduction to using Python to process CSV files" as I'm looking to do something I'd think is easy in Python but I don't know how to do it. 
I simply want to examine a column then create a new column based on an if-then on the original column. So if my CSV has a "gender" field I'd like to do the Python equivalent of this SQL statement: 
case when gender = 'M' then 1 else 0 end as gender_m, case when gender = 'F' then 1 else 0 end as gender_f,...
I can do it in Pandas but my CSVs are too big and I run into memory issues. 
There are a number of ways to tackle this.

First -- and foremost -- this is almost always just one step in a much longer and more complex set of operations. It's a little misleading to read-and-write a CSV file to do this.

A little misleading.

It's not wrong to write a file with expanded data. But the "incrementally write new files" process can become rather complex. If we have a large number of transformations, we can wind up with many individual file-expansion steps. These things often grow organically and can get out of control. A complex set of steps should probably be collapsed into a single program that handles all of the expansions at once.

This kind of file-expansion is simple and fast. It can open a door previously closed by the in-memory problem of trying to do the entire thing in pandas.

The general outline looks like this:

from pathlib import Path
import csv

source_path = Path("some_file.csv")
target_path = Path(source_path.stem + "_1").with_suffix('.csv')

def transform(row):
    # build new_row
    return row

with source_path.open() as source_file:
    with target_path.open('w', newline='') as target_file:
        reader = csv.DictReader(source_file)
        columns = reader.fieldnames + ['gender_m', 'gender_f']
        writer = csv.DictWriter(target_file, columns)
        writer.writeheader()
        for row in reader:
            new_row = transform(row)
            writer.writerow(new_row)

The goal is to be able to put some meaningful transformation processing in place of the build new_row comment.

The overall approach is this.

1. Create Path objects to refer to the relevant files.

2. Use with-statement context managers to handle the open files. This assures that the files are always properly closed no matter what kinds of exceptions are raised.

3. Create a dictionary-based reader for the input.  Add the additional columns and create a dictionary-based writer for the output. This allows the processing to work with each row of data as a dictionary.
This presumes that the data file actually has a single row of heading information with column names.

If column names are missing, then a fieldnames attribute can be provided when creating the DictReader(), like this: csv.DictReader(source_file, ['field', 'field', ...]).

The for statement works because a csv Reader is an iterator over each row of data.

I've omitted any definition of the transformational function. Right now, it just returns each row unmodified. We'd really like it to do some useful work.

Building The New Row

The transformation function needs to build a new row from an existing row.

Each row will be a Python dictionary. A dictionary is a mutable object. We aren't really building a completely new object -- that's a waste of memory. We'll modify the row object, and return it anyway. It will involve a microscopic redundancy of creating two references to the same dictionary object, one known by the variable name row and the other known by new_row.

Here's an example body for transform()

def transform(row):
    row['gender_m'] = 1 if row['gender'] == 'M' else 0
    row['gender_f'] = 1 if row['gender'] == 'F' else 0
    return row

This will build two new keys in the row dictionary. These are exactly the two keys that were added to the fieldnames when creating the writer.

Each key will be associated with a value computed by a simple expression. In this case, the conditional if-else expression is used to map a boolean value, row['gender'] == 'M', to one of two integer values, 1 or 0.

If this is confusing -- and it can be -- this can also be done with if statements instead of expressions.

def transform(row):
    if row['gender'] == 'M':
        row['gender_m'] = 1
    else:
        row['gender_m'] = 0
    row['gender_f'] = 1 if row['gender'] == 'F' else 0
    return row

I only rewrite the 'M' case. I'll leave the rewrite of the 'F' case to the reader.
Faster Processing with a Generator

We can simplify the body of the script slightly. This will make it work a hair faster. The following statements involve a little bit of needless overhead.

        for row in reader:
            new_row = transform(row)
            writer.writerow(new_row)

We can change this as follows:

        writer.writerows(transform(row) for row in reader)

This uses a generator expression, transform(row) for row in reader, to build individually transformed rows from a source of data. This doesn't involve executing separate statements for each row of data. Therefore, it's faster.

We can also reframe it like this.

        writer.writerows(map(transform, reader))

In this example, we've replaced the generator expression with the map() function. This applies the transform() function to each row available in the reader.

In both cases, the writer.writerows() consumes the data produced by the generator expression or the map() function to create the output file.

The idea is that we can make the transform() function as complex as we need. We just have to be sure that all the new field names are handled properly when creating the writer object.
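Pulling the pieces above together: here is a minimal consolidated sketch. The gender columns and the "some_file.csv" name are the examples from this post; the expand() helper name is my own.

```python
from pathlib import Path
import csv

def transform(row):
    # Add the two indicator columns described above.
    row['gender_m'] = 1 if row['gender'] == 'M' else 0
    row['gender_f'] = 1 if row['gender'] == 'F' else 0
    return row

def expand(source_path):
    # Write a new CSV with the additional columns, next to the source file.
    target_path = Path(source_path.stem + "_1").with_suffix('.csv')
    with source_path.open() as source_file:
        with target_path.open('w', newline='') as target_file:
            reader = csv.DictReader(source_file)
            columns = reader.fieldnames + ['gender_m', 'gender_f']
            writer = csv.DictWriter(target_file, columns)
            writer.writeheader()
            writer.writerows(map(transform, reader))
    return target_path
```

Calling expand(Path("some_file.csv")) produces some_file_1.csv with the two extra columns appended to each row.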

Categories: FLOSS Project Planets

Petter Reinholdtsen: Free software archive system Nikita now able to store documents

Planet Debian - Sun, 2017-03-19 03:00

The Nikita Noark 5 core project is implementing the Norwegian standard for keeping an electronic archive of government documents. The Noark 5 standard documents the requirements for data systems used by the archives in the Norwegian government, and the Noark 5 web interface specification documents a REST web service for storing, searching and retrieving documents and metadata in such an archive. I've been involved in the project since a few weeks before Christmas, when the Norwegian Unix User Group announced it supported the project. I believe this is an important project, and hope it can make it possible for the government archives in the future to use free software to keep the archives we citizens depend on. But as I do not hold such archive myself, personally my first use case is to store and analyse public mail journal metadata published from the government. I find it useful to have a clear use case in mind when developing, to make sure the system scratches one of my itches.

If you would like to help make sure there is a free software alternatives for the archives, please join our IRC channel (#nikita on irc.freenode.net) and the project mailing list.

When I got involved, the web service could store metadata about documents. But a few weeks ago, a new milestone was reached when it became possible to store full text documents too. Yesterday, I completed an implementation of a command line tool archive-pdf to upload a PDF file to the archive using this API. The tool is very simple at the moment; it finds existing fonds, series and files, asking the user to select which one to use if more than one exists. Once a file is identified, the PDF is associated with the file and uploaded, using the title extracted from the PDF itself. The process is fairly similar to visiting the archive, opening a cabinet, locating a file and storing a piece of paper in the archive. Here is a test run directly after populating the database with test data using our API tester:

~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446
 0 - Title of the test case file created 2017-03-18T23:49:32.103446
 1 - Title of the test file created 2017-03-18T23:49:32.103446
Select which mappe you want (or search term): 0
Uploading mangelmelding/mangler.pdf
  PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
  File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446
~/src//noark5-tester$

You can see here how the fonds (arkiv) and series (arkivdel) only had one option, while the user needs to choose which file (mappe) to use among the two created by the API tester. The archive-pdf tool can be found in the git repository for the API tester.

In the project, I have been mostly working on the API tester so far, while getting to know the code base. The API tester currently uses the HATEOAS links to traverse the entire exposed service API and verify that the exposed operations and objects match the specification, as well as trying to create objects holding metadata and uploading a simple XML file to store. The tester has proved very useful for finding flaws in our implementation, as well as flaws in the reference site and the specification.

The test document I uploaded is a summary of all the specification defects we have collected so far while implementing the web service. There are several unclear and conflicting parts of the specification, and we have started writing down the questions we get from implementing it. We use a format inspired by how The Austin Group collect defect reports for the POSIX standard with their instructions for the MANTIS defect tracker system, in lack of an official way to structure defect reports for Noark 5 (our first submitted defect report was a request for a procedure for submitting defect reports :).

The Nikita project is implemented using Java and Spring, and is fairly easy to get up and running using Docker containers for those that want to test the current code base. The API tester is implemented in Python.

Categories: FLOSS Project Planets

Clint Adams: Measure once, devein twice

Planet Debian - Sun, 2017-03-19 00:38

Ophira lived in a wee house in University Square, Tampa. It had one floor, three bedrooms, two baths, a handful of family members, a couple pets, some plants, and an occasional staring contest.

Mauricio lived in Lowry Park North, but Ophira wasn’t allowed to go there because Mauricio was afraid that someone would tell his girlfriend. Ophira didn’t like Mauricio’s girlfriend and Mauricio’s girlfriend did not like Ophira.

Mauricio did not bring his girlfriend along when he and Ophira went to St. Pete Beach. They frolicked in the ocean water, and attempted to have sex. Mauricio and Ophira were big fans of science, so somewhat quickly they concluded that it is impossible to have sex underwater, and absconded to Ophira’s car to have sex therein.

“I hate Mauricio’s girlfriend,” Ophira told Amit on the telephone. “She’s not even pretty.”

“Hey, listen,” said Amit. “I’m going to a wedding on Captiva.”

“Oh, my family used to go to Captiva every year. There’s bioluminescent algae and little crabs and stuff.”

“Yeah? Do you want to come along? You could pick me up at the airport.”

“Why would I want to go to a wedding?”

“Well, it’s on the beach and they’re going to have a bouncy castle.”

“A bouncy castle‽ Are you serious?”


“Well, okay.”

Amit prepared to go to the wedding and Ophira became terse then unresponsive. After he landed at RSW, he called Ophira, but instead of answering the phone she startled and fell out of her chair. Amit arranged for other transportation toward the Sanibel Causeway. Ophira bit her nails for a few hours, then went to her car and drove to Cape Coral.

Ophira cruised around Cape Coral for a while, until she spotted a teenager cleaning a minivan. She parked her car and approached him.

“Whatcha doing?” asked Ophira, pretending to chew on imaginary gum.

The youth slid the minivan door open. “I’m cleaning,” he said hesitantly.

“Didn’t your parents teach you not to talk to strangers? I could do all kinds of horrible things to you.”

They conversed for a bit. She recounted a story of her personal hero, a twelve-year-old girl who seduced and manipulated older men into ruin. She rehashed the mysteries of Mauricio’s girlfriend. She waxed poetic on her love of bouncy castles. The youth listened, hypnotized.

“What’s your name, kid?” Ophira yawned.

“Arjun,” he replied.

“How old are you?”

Arjun thought about it. “15,” he said.

“Hmm,” Ophira stroked her chin. “Can you sneak me into your room so that your parents never find out about it?”

Arjun’s eyes went wide.

MEANWHILE, on Captiva Island, Amit had learned that even though the Tenderly had multiple indoor jacuzzis, General Fitzpatrick and Mrs. Fitzpatrick had decided it prudent to have sex in the hot tub on the deck; that the execution of this plan had somehow necessitated a lengthy cleaning process before the hot tub could be used again; that that’s why workmen were cleaning the hot tub; and that the Fitzpatrick children had gotten General Fitzpatrick and Mrs. Fitzpatrick to agree to not do that again, with an added suggestion that they not be seen doing anything else naked in public.

A girl walked up to Amit. “Hey, I heard you lost your plus-one. Are you here alone? What a loser!” she giggled nervously, then stared.

“Leave me alone, Darlene,” sighed Amit.

Darlene’s face reddened as she spun on her heels and stormed over to Lisette. “Oh my god, did you see that? I practically threw myself at him and he was abusive toward me. He probably has all the classic signs of being an abuser. Did you hear about that girl he dated in Ohio? I bet I know why that ended.”

“Oh really?” said Lisette distractedly, looking Amit up and down. “So he’s single now?”

Darlene glared at Lisette as Amit wandered back outside to stare at the hot tub.

“Hey kid,” said Ophira, “bring me some snacks.”

“I don’t bring food into my room,” said Arjun. “It attracts pests.”

“Is that what your parents told you?” scoffed Ophira. “Don’t be such a wuss.”

Three minutes later, Ophira was finishing a bag of paprika puffs. “These are great, Arjun! Where do you get these?”

“My cousin sends them from Europe,” he explained.

“Now get me a diet soda.”

Amit strolled along the beach, then yelped. “What’s biting my legs?” he cried out.

“Those are sand fleas,” said Nessarose.

“What are sand fleas?” asked Amit incredulously.

Nessarose rolled her eyes. “Stop being a baby and have a drink.”

After the sun went down, Amit began to notice the crabs, and this made him drink more.

When everyone was soused, General Fitzpatrick announced that they were going for a swim in the Gulf, in direct contravention of safety guidelines. Most of the guests were wise enough to refuse, but an eightsome swam out, occasionally stopping to slap the algae, but continuing until they reached the sandbar that General Fitzpatrick correctly claimed was there.

Then screams echoed through the night as all the jellyfish attacked everyone invading their sandbar.

The crestfallen swimming party eventually made it back to shore.

“Pee on the jellyfish sting,” commanded Nessarose. “It’s the best cure.”

“No!” shouted General Fitzpatrick’s daughter. “Urine makes it worse.”

Things quickly escalated from Nessarose and General Fitzpatrick’s daughter screaming at each other to the beach dividing into three factions: those siding with Nessarose, those siding with General Fitzpatrick’s daughter, and those who had no idea what was going on. General Fitzpatrick had no interest in any of this, and went straight to bed.

“It’s getting late, kid,” said Ophira. “I’m taking your bed.”

“What?” squeaked Arjun.

“Look,” said Ophira, “your bed is small and there isn’t room for both of us. You may sleep on the floor if you’re quiet and don’t bother me.”

“What?” squeaked Arjun.

“Are you deaf, kid?” Ophira grunted and then went to bed.

Arjun blinked in confusion, then tried to fall asleep on the floor, without much success.

Ophira got up in the morning and said, “Before I go, I want to teach you a valuable lesson.”

“What?” groaned Arjun, getting to his feet.

“You should be careful talking to strangers. Now, I told you that I could do horrible things to you, so this is not my fault; it’s yours,” she announced, then sucker-punched him in the gut.

Ophira climbed out the window as Arjun doubled over.

As the ceremony began, only a small minority of the wedding party was visibly suffering from jellyfish stings, which may or may not have helped with ignoring the sand fleas.

The ceremony ended shortly thereafter, and now that marriage had been accomplished, everyone turned their attention to food and drink and swimming less irresponsibly than the night before. Guests that needed to return home sooner departed in waves and Amit started to appreciate the more peaceful environment.

He heard the deck door slide open behind him and turned his attention away from the hot tub.

“Hey, mofo,” Ophira shouted as she strode stylishly out onto the deck. “Where’s this bouncy castle?”

Amit blinked in surprise. “That was yesterday. You missed it.”

“Oh,” she frowned. “So I met this South Slav guy with a really sexy forehead, and I need some advice. I don’t know if I should call him or wait.”

Amit pointed to the hot tub and told her the story of General Fitzpatrick and Mrs. Fitzpatrick and the hot tub.

“What?” said Ophira. “How could they have sex underwater?”

“What do you mean?” asked Amit.

“Well, it’s impossible,” she replied.

Posted on 2017-03-19 Tags: mintings
Categories: FLOSS Project Planets

DrupalEasy: DE Live: Drupal 9 Reaction

Planet Drupal - Sat, 2017-03-18 21:07

Direct .mp3 file download.

A quick live podcast featuring a reaction from Mike and Ryan about Dries' Drupal 9 Blog Post. Recorded on YouTube Live, and this audio version is reposted to our podcast channel for your convenience.

DrupalEasy News Follow us on Twitter Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Categories: FLOSS Project Planets

DrupalEasy: DrupalEasy Podcast 192 - 8+ Reasons to Love Drupal 8+

Planet Drupal - Sat, 2017-03-18 20:15

Direct .mp3 file download.

Almost all of the DrupalEasy Podcast hosts congregate to take a look back and a look forward at Drupal 8. We discuss some of our favorite things about Drupal 8 as well as what we're looking forward to the most in the coming year. Also, Anna provides us with a first-person look at DrupalCamp Northern Lights (Iceland), and Ted leads a discussion on Drupal 8.3.


Our favorite things about Drupal 8 (so far).

  • Mike - everything you can do with just core, plugins.
  • Ted - object-oriented codebase, experimental modules.
  • Ryan - configuration management, migrate in core.
  • Anna - module and theme libraries in core and base themes in core, view modes.
  • Andrew - Restful services in core, Composer all the things.

What are we looking forward to the most in the Drupal universe in 2017?

DrupalEasy News
Three Stories
Sponsors
Upcoming Events
Follow us on Twitter
Five Questions (answers only)
  1. Brewing beer.
  2. Windows Subsystem for Linux.
  3. Hiking the Appalachian trail (Jim Smith's blog).
  4. Giraffe.
  5. Doing three Drupal sites in three months, the first Orlando Drupal meetups.
Intro Music
Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Categories: FLOSS Project Planets

DSPIllustrations.com: Fourier Series and Harmonic Approximation

Planet Python - Sat, 2017-03-18 18:55
The Fourier Series and Harmonic Approximation

In this article, we will walk through the origins of the Fourier transform: the Fourier Series. The Fourier series takes a periodic signal x(t) and describes it as a sum of sine and cosine waves. Noting that sine and cosine are themselves periodic functions, it becomes clear that x(t) is also a periodic function.

Mathematically, the Fourier series is described as follows. Let x(t) be a periodic function with period T, i.e.

x(t)=x(t+nT), n\in\mathbb{Z}.

Then, we can write x(t) as a Fourier series by

x(t)=\frac{a_0}{2}+\sum_{n=1}^{\infty}\left(a_n\cos(2\pi \frac{nt}{T})+b_n\sin(2\pi\frac{nt}{T})\right),

where a_n and b_n are the coefficients of the Fourier series. They can be calculated by \begin{align}a_n&=\frac{2}{T}\int_0^Tx(t)\cos(2\pi \frac{nt}{T})dt\\ b_n&=\frac{2}{T}\int_0^Tx(t)\sin(2\pi \frac{nt}{T})dt\end{align}.

Note that for a function with period T, the frequencies of the sines and cosines are \frac{1}{T}, \frac{2}{T}, \frac{3}{T}, \dots, i.e. they are multiples of the fundamental frequency \frac{1}{T}, which is the inverse of the function's period. Therefore the frequency \frac{n}{T} is called the nth harmonic. The name harmonic stems from the fact that, to the human ear, frequencies with integer ratios sound "nice", and these frequencies are all integer multiples of the fundamental frequency.

Let us verify the calculation of the Fourier coefficients and the function reconstruction numerically. First, we ...
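The article is truncated at this point, but a minimal sketch of such a numerical verification might look like the following (this uses NumPy and a test signal of my own choosing, not the article's own code):

```python
import numpy as np

def fourier_coefficients(x, T, N, num_samples=10000):
    """Approximate a_n and b_n for n = 0..N using the integral formulas,
    sampled with the rectangle rule (spectrally accurate for periodic x)."""
    t = np.linspace(0.0, T, num_samples, endpoint=False)
    dt = T / num_samples
    xt = x(t)
    a = np.array([(2.0 / T) * np.sum(xt * np.cos(2 * np.pi * n * t / T)) * dt
                  for n in range(N + 1)])
    b = np.array([(2.0 / T) * np.sum(xt * np.sin(2 * np.pi * n * t / T)) * dt
                  for n in range(N + 1)])
    return a, b

def reconstruct(a, b, T, t):
    """Evaluate the partial Fourier sum a_0/2 + sum_n (a_n cos + b_n sin)."""
    xr = np.full_like(t, a[0] / 2.0)
    for n in range(1, len(a)):
        xr += a[n] * np.cos(2 * np.pi * n * t / T) \
            + b[n] * np.sin(2 * np.pi * n * t / T)
    return xr

# Test signal with known coefficients a_1 = 1 and b_2 = 0.5, period T = 1.
T = 1.0
x = lambda t: np.cos(2 * np.pi * t) + 0.5 * np.sin(4 * np.pi * t)
a, b = fourier_coefficients(x, T, N=5)
```

Recovering a_1 ≈ 1 and b_2 ≈ 0.5 (with all other coefficients near zero), and seeing the reconstruction match x(t), confirms the coefficient formulas numerically.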

Categories: FLOSS Project Planets

Mike Driscoll: Python 101 – An Intro to IDLE

Planet Python - Sat, 2017-03-18 16:29

Python comes with its own code editor: IDLE (Integrated Development and Learning Environment). There is some lore that the name IDLE comes from Eric Idle, a member of Monty Python. An IDE is an editor for programmers that provides color highlighting of key words in the language, auto-complete, an “experimental” debugger and lots of other fun things. You can find an IDE for most popular languages, and a number of IDEs will work with multiple languages. IDLE is kind of a lite IDE, but it does have all those items mentioned. It allows the programmer to write Python and debug their code quite easily. The reason I call it “lite” is that the debugger is very basic and it’s missing other features that programmers with a background in products like Visual Studio will miss. You might also like to know that IDLE was created using Tkinter, a Python GUI toolkit that comes with Python.

To open up IDLE, you will need to find it and you’ll see something like this:

Yes, it’s a Python shell where you can type short scripts and see their output immediately and even interact with code in real time. There is no compiling of the code as Python is an interpretive language and runs in the Python interpreter. Let’s write your first program now. Type the following after the command prompt (>>>) in IDLE:

print("Hello from Python!")

You have just written your first program! All your program does is write a string to the screen, but you’ll find that very helpful later on. Please note that the print statement has changed in Python 3.x. In Python 2.x, you would have written the above like this:

print "Hello from Python!"

In Python 3, the print statement was turned into a print function, which is why parentheses are required. You will learn what functions are in chapter 10.

If you want to save your code into a file, go to the File menu and choose New Window (or press CTRL+N). Now you can type in your program and save it here. The primary benefit of using the Python shell is that you can experiment with small snippets to see how your code will behave before you put the code into a real program. The code editor screen looks a little different than the IDLE screenshot above:

Now we’ll spend a little time looking at IDLE’s other useful features.

Python comes with lots of modules and packages that you can import to add new features. For example, you can import the math module for all kinds of good math functions, like square roots, cosines, etcetera. In the File menu, you’ll find a Path Browser which is useful for figuring out where Python looks for module imports. You see, Python first looks in the same directory as the script that is running to see if the file it needs to import is there. Then it checks a predefined list of other locations. You can actually add and remove locations as well. The Path Browser will show you where these files are located on your hard drive, if you have imported anything. My Path Browser looks like this:
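The Path Browser is essentially a window onto Python's sys.path list, and the same locations can be inspected and changed programmatically. A small sketch (the extra directory here is a hypothetical example, not one of Python's real defaults):

```python
import sys

# sys.path is the list of directories Python searches for imports;
# IDLE's Path Browser displays much the same information graphically.
for entry in sys.path:
    print(entry or "(current directory)")

# Locations can be added and removed at runtime as well:
sys.path.append("/tmp/my_modules")   # hypothetical extra search location
sys.path.remove("/tmp/my_modules")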

Next there’s a Class Browser that will help you navigate your code. Frankly it would make more sense if this menu option was called “Module Browser” as that is much closer to what you’ll actually be doing. This is actually something that won’t be very useful to you right now, but will be in the future. You’ll find it helpful when you have lots of lines of code in a single file as it will give you a “tree-like” interface for your code. Note that you won’t be able to load the Class Browser unless you have actually saved your program.

The Edit menu has your typical features, such as Copy, Cut, Paste, Undo, Redo and Select All. It also contains various ways to search your code and do a search and replace. Finally, the Edit menu has some menu items that will show you various things, such as highlighting parentheses or displaying the auto-complete list.

The Format menu has lots of useful functionality. It has some helpful items for indenting and dedenting your code, as well as commenting out your code. I find that pretty helpful when I’m testing my code. Commenting out your code can be very helpful. One way it can be helpful is when you have a lot of code and you need to find out why it’s not working correctly. Commenting out portions of it and re-running the script can help you figure out where you went wrong. You just go along slowly uncommenting out stuff until you hit your bug. Which reminds me; you may have noticed that the main IDLE screen has a Debugger menu.

That is nice for debugging, but only in the Shell window. Sadly you cannot use the debugger in your main editing menu. However you can run a module with debugging turned on such that you are able to interact with your program’s objects. This can be useful in loops where you are trying to determine the current value of an item inside the loop, for example. If you happen to be using tkinter to create a user interface (UI), you can actually leave the mainloop() call off (which can block the UI) so you can debug your user interface. Finally, when an exception is raised with your debugger running, you can double-click the exception to jump directly to the code where the exception happened.

If you need a more versatile debugger, you should either find a different IDE or try Python’s debugger found in the pdb library.

The Run menu has a couple of handy options. You can use it to bring up the Python Shell, check your code for errors, or run your code. The Options menu doesn’t have very many items. It does have a Configure option that allows you to change the code highlighting colors, fonts and key shortcuts. Other than that, you get a Code Context option that is helpful in that it puts an overlay in the editing window which will show you which class or function you’re currently in. You will find this feature is useful whenever you have a lot of code in a function and the name has scrolled off the top of the screen. With this option enabled, that doesn’t happen. Of course, if the function is too large to fit on one screen, then it may be getting too long and it could be time to break that function down into multiple functions. The other neat item in the Settings dialog is under the General tab where you can add other documentation. What this means is that you can add URLs to 3rd Party documentation, such as SQLAlchemy or pillow, and have it pulled into IDLE. To access the new documentation, just jump to the Help menu.

The Windows menu shows you a list of currently open Windows and allows you to switch between them.

Last but not least is the Help menu where you can learn about IDLE, get help with IDLE itself or load up a local copy of the Python documentation. The documentation will explain how each piece of Python works and is pretty exhaustive in its coverage. The Help menu is probably the most helpful in that you can get access to the docs even when you’re not connected to the internet. You can search the documentation, find HOWTOs, read about any of the builtin libraries, and learn so much your head will probably start spinning.

Wrapping Up

In this article we learned how to use Python’s integrated development environment, IDLE. At this point, you should be familiar enough with IDLE to use it on your own. There are many other integrated development environments (IDEs) for Python. There are free ones like PyDev and Editra, and there are some others that you have to pay for, such as WingWare and PyCharm, although they both have free versions too. There are also plug-ins for regular text editors that allow you to code in Python too. I think IDLE is a good place to start, but if you already have a favorite editor, feel free to continue using that.

If you happen to be a visual learner, I also created a screencast version of this tutorial:

This is from my Python 101 Screencast

Categories: FLOSS Project Planets

Programming Ideas With Jake: A Problem With Python’s Code Blocks

Planet Python - Sat, 2017-03-18 15:30
Python's code blocks don't restrict scope in any way. Not even in the important way.
Categories: FLOSS Project Planets

Bryan Pendleton: Bands I've been listening to recently ...

Planet Apache - Sat, 2017-03-18 14:11

... ranked by the number of their albums I've got.

  • Band of Horses: 5
  • Blind Pilot: 3
  • Mumford & Sons: 3
  • Fleet Foxes: 3
  • Lumineers: 2
  • Lord Huron: 2
  • Of Monsters and Men: 2
  • Johnny Flynn: 2
  • Judah and the Lion: 1
  • The Revivalists: 1
  • Susto: 1

Who else should I be listening to? Gregory Alan Isakov? First Aid Kit? Nathaniel Rateliff? Somebody else entirely?

And when will there be new work from The Lumineers, Lord Huron, Mumford & Sons, or Fleet Foxes?

Categories: FLOSS Project Planets

PyBites: Code Challenge 10 - Build a Hangman Game - Review

Planet Python - Sat, 2017-03-18 13:00

It's end of the week again so we review the code challenge of this week. It's never late to sign up, just fork our challenges repo and start coding.

Categories: FLOSS Project Planets

Kubuntu has a new member: Darin Miller

Planet KDE - Sat, 2017-03-18 12:46

Today at 15:58 UTC the Kubuntu Council approved Darin Miller’s application for becoming a Kubuntu Member.

Darin has been coming to the development channel and taking part in the informal developer meetings on Big Blue Button for a while now, helping out where he can with the packaging and continuous integration. His efforts have already made a huge difference.

Here’s a snippet of his interview:

<DarinMiller> I have contributed very little independently, but I have helped fix lintian issues, control files deps, and made a very minor mod to one of the KA scripts.
<clivejo> minor mod?
<acheronuk> very useful mod IIR ^^^
<clivejo> I think you are selling yourself short there!
-*- clivejo was very frustrated with the tooling prior to that fix
<DarinMiller> From coding perspective, it was well within my skillset, so the mod seemed minor to me.
<clivejo> well it was much appreciated
<yofel> when did you start hanging out here and how did you end up in this channel?
<DarinMiller> That’s another reason I like this team. I feel my efforts are appreciated.
<DarinMiller> And that encourages me to want to do more.

He is obviously a very modest chap and the Kubuntu team would like to offer him a very warm welcome, as well as greeting him with our hugs and the list of jobs / work to be done!

For those interested here’s Darin’s wiki page: https://wiki.kubuntu.org/~darinmiller and his Launchpad page: https://launchpad.net/~darinmiller

The meeting log is available here.

Categories: FLOSS Project Planets

Community Over Code: What Apache needs in a Board

Planet Apache - Sat, 2017-03-18 12:06

The ASF is holding its annual members' meeting soon, where we will elect a new 9-member Board of Directors for a one-year term.  I’ve been honored with a nomination to run for the board again, as have a number of other excellent Member candidates.  While I’m writing my nomination statement – my 2016 director statement and earlier ones are posted – I’ve been thinking about what Apache really needs in a board to manage the growth of our projects and to improve our operations.

I’ve been thinking about this a lot in the past year, and I like to think I have an easy to explain answer to “what Apache needs in a board”.

We need a board to provide two things: Independent Oversight, of both projects and officers; and Strategic Vision and Drive.

Independent Oversight

Independent oversight is the core value the ASF offers as a community hosting organization. We are a 501C3 public charity, and we rely solely on unpaid volunteers to perform all governance activities. That means that we can ensure our projects are run for the benefit of the public and the world, and not just for individual for-profit companies.

In particular, I am confident that we can maintain this corporate independence, even in the face of project and organizational growth and any potential future needs to hire more staff for operations. Our cultural history and Member ability provide oversight to both project and corporate operations mean the Membership will be able to keep us independent for the next 50 years.

Oversight of Projects: The board provides oversight to our projects. The board does this by reviewing quarterly project reports, and then only providing 1) mentoring when requested, or 2) board requests or directives only if the project is not capable of correcting problems themselves. As has been noted before: the board acts slowly by design: it gives projects a chance to self-correct before taking organizational action (ultimately by changing a PMC, in very rare cases).

Oversight of Operations: The board appoints officers to perform the daily corporate operations needed (infra, publicity, etc.), and then provides oversight to the President or those officers via monthly reports. This is a key point here: we need a board that can delegate operations to the officers, and treat them more like PMCs. That is, we need a board that respects delegation rather than micromanaging. Like PMCs, if the board sees something odd, they should request an update in the next report from the officer. If the officer can’t self-correct (like we give PMCs a chance to do), only then should the board step in, with specific directives to make changes.

Strategic Vision And Drive

Strategic vision and drive: the board needs to think ahead, and plan in broad strokes where we’d like to see the ASF be in 5 years, and how that can best serve the needs of our project contributors, our users in general, and the volunteers and staff who perform our corporate operations.

We’re incredibly lucky to have director candidates with broad experience, strong viewpoints, and a willingness to volunteer their time for the position. We need a board that can take this experience to think about the big picture, and how the ASF can remain relevant, exciting, and a well-functioning organization for years to come. This includes supporting both our paid staff and our many, many volunteers with an efficient and helpful environment across our operations.

Individual Directors

Along with a good board, we need directors who can communicate clearly, professionally, and consistently. The larger world and many community members view Directors as a very specific role, and it’s clear from the feedback over the years that many outsiders (i.e. not regularly active in internal operations and governance at the ASF) see each director as being A Director in their emails.

As we rely on volunteers both for project work and governance, we need directors who can keep their messages clear, consistent, and always remember what audience they are speaking to. That includes both mentoring/overseeing project communities; reviewing officers or operational areas; or in public in general when speaking about the ASF.

We also need at least some Directors willing to serve as public spokespersons for the ASF. In many cases, Sponsors and the press/analysts expect to speak to someone with  A Senior Title, like Director or President. While the Apache Way minimizes the importance of titles inside of our communities, the reality in the real world is that titles matter to many other people.

For those folks interested in the nitty-gritty details of how the ASF elects its board, you can read about the STV tools we use from the Apache STeVe project.

The post What Apache needs in a Board appeared first on Community Over Code.

Categories: FLOSS Project Planets

BangPypers: IoT Workshop - Mar, 2017

Planet Python - Sat, 2017-03-18 11:03

For this month (March), we conducted an IoT workshop hosted and presented by Sudhir Rawat and Zeeshan at Microsoft.

The strength of attendance was around 75 people, more than 50% of those who had RSVP-ed which was a good indication of interest.

The first hour was spent in setting up the wifi connection for attendees and providing them with Azure credentials and distributing the Raspberry Pi kits (1 device per table was allotted - for 10 tables). Sudhir and Zeeshan had already arranged for these to be distributed efficiently, and that helped ease the whole process. Attendees also introduced themselves which allowed for a demographic expectation to be set - most people wanted an introductory hands-on experience with IOT.

Once the logistics were out of the way, the IoT Hub setup, storage configuration, etc. on the Azure platform was taught. The Pi devices provided had been pre-loaded with Raspbian and Python programs to communicate with the connected device. The three main tasks this workshop focused on were: observing how the fingerprint sensor connected to pin 3 (for our setup) of the device relayed an acknowledgement upon being touched; creating a new device on the IoT Hub; and sending messages from the device to the Azure Analytics module in the Hub using Python.
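The workshop's actual code is linked at the end of this post; purely as an illustrative sketch (the device id, payload fields, and helper function here are my own invention, not the workshop's), building a telemetry message for such a touch event might look like:

```python
import json
import time

FINGERPRINT_PIN = 3  # the sensor pin used in the workshop setup

def make_telemetry(device_id, pin, touched):
    """Build a JSON message a device might send to the IoT Hub
    when the fingerprint sensor registers a touch."""
    return json.dumps({
        "deviceId": device_id,
        "pin": pin,
        "event": "touch" if touched else "idle",
        "timestamp": time.time(),
    })

msg = make_telemetry("table-07-pi", FINGERPRINT_PIN, touched=True)
print(msg)

# Actually sending it would go through the Azure IoT device SDK, e.g.
# (not run here, CONN_STR being the device connection string):
#   client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
#   client.send_message(msg)
```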

The talk was well received and the feedback was overall very positive, requesting that a second session be conducted to continue the pattern and material of this workshop.

See you all at the next meetup! :)

PS -

The code used in the meetup can be found at Sudhir's Github link.

Block diagram as presented -

Categories: FLOSS Project Planets

Nick Kew: Equinox

Planet Apache - Sat, 2017-03-18 09:44

Just noticed:  Sunrise 06:25 Sunset 18:26.  Starting today, we are into the season of daylight!

We’ve had some spring weather too, though nothing dramatic.  What is looking impressive is the wide range of spring flowers and blossom all around.  Not just the Usual Suspects like daffodils and primroses, but even later flowers like the tulips in the front garden are peeping through.  And we have the appearance of other spring wildlife, like the bumblebees servicing the flowers in the garden.

Also mildly bemused by the white heather at the bottom of the garden.  I’ve seen heather ranging from red/pink through to blueish, but pure white is new to me.

Categories: FLOSS Project Planets

Vincent Sanders: A rose by any other name would smell as sweet

Planet Debian - Sat, 2017-03-18 09:01
Often I end up dealing with code that works but might not be of the highest quality. While quality is subjective, I like to use the idea of "code smell" to convey what I mean: a list of indicators that, taken together, help to identify code that might benefit from some improvement.

Such smells may include:
  • Complex code lacking comments on intended operation
  • Code lacking API documentation comments especially for interfaces used outside the local module
  • Not following style guide
  • Inconsistent style
  • Inconsistent indentation
  • Poorly structured code
  • Overly long functions
  • Excessive use of pre-processor
  • Many nested loops and control flow clauses
  • Excessive numbers of parameters
I am most certainly not alone in using this approach and Fowler et al have covered this subject in the literature much better than I can here. One point I will raise though is some programmers dismiss code that exhibits these traits as "legacy" and immediately suggest a fresh implementation. There are varying opinions on when a rewrite is the appropriate solution from never to always but in my experience making the old working code smell nice is almost always less effort and risk than a re-write.
Tests

When I come across smelly code, and I decide it is worthwhile improving it, I often discover the biggest smell is lack of test coverage. Now do remember this is just one code smell and on its own might not be indicative; my experience is that smelly code seldom has effective test coverage while fresh code often does.

Test coverage is generally understood to be the percentage of source code lines and decision paths used when instrumented code is exercised by a set of tests. Like many metrics developer tools produce, "coverage percentage" is often misused by managers as a proxy for code quality. Both Fowler and Marick have written about this but sufficient to say that for a developer test coverage is a useful tool but should not be misapplied.

Although refactoring without tests is possible, the chances of unintended consequences are proportionally higher. I often approach such a refactor by enumerating all the callers and constructing a description of the used interface beforehand, then checking that that interface is not broken by the refactor. At that point it is probably worth writing a unit test to automate the checks.

Because of this I have changed my approach to such refactoring to start by ensuring there is at least basic API code coverage. This may not yield the fashionable 85% coverage target but is useful and may be extended later if desired.

It is widely known and equally widely ignored that for maximum effectiveness unit tests must be run frequently and developers take action to rectify failures promptly. A test that is not being run or acted upon is a waste of resources both to implement and maintain which might be better spent elsewhere.

For projects I contribute to frequently I try to ensure that the CI system is running the coverage target, and hence the unit tests, which automatically ensures any test breaking changes will be highlighted promptly. I believe the slight extra overhead of executing the instrumented tests is repaid by having the coverage metrics available to the developers to aid in spotting areas with inadequate tests.
Example

A short example will help illustrate my point. When a web browser receives an object over HTTP, the server can supply a MIME type in a content-type header that helps the browser interpret the resource. However this meta-data is often problematic (sorry, that should read "a misleading lie") so the actual content must be examined to get a better answer for the user. This is known as mime sniffing and of course there is a living specification.

The source code that provides this API (Linked to it rather than included for brevity) has a few smells:
  • Very few comments of any type
  • The API are not all well documented in its header
  • A lot of global context
  • Local static strings which should be in the global string table
  • Pre-processor use
  • Several long functions
  • Exposed API has many parameters
  • Exposed API uses complex objects
  • The git log shows the code has not been significantly updated since its implementation in 2011 but the spec has.
  • No test coverage
While some of these are obvious the non-use of the global string table and the API complexity needed detailed knowledge of the codebase, just to highlight how subjective the sniff test can be. There is also one huge air freshener in all of this which definitely comes from experience and that is the modules author. Their name at the top of this would ordinarily be cause for me to move on, but I needed an example!

First thing to check is the API use

$ git grep -i -e mimesniff_compute_effective_type --or -e mimesniff_init --or -e mimesniff_fini
content/hlcache.c: error = mimesniff_compute_effective_type(handle, NULL, 0,
content/hlcache.c: error = mimesniff_compute_effective_type(handle,
content/hlcache.c: error = mimesniff_compute_effective_type(handle,
content/mimesniff.c:nserror mimesniff_init(void)
content/mimesniff.c:void mimesniff_fini(void)
content/mimesniff.c:nserror mimesniff_compute_effective_type(llcache_handle *handle,
content/mimesniff.h:nserror mimesniff_compute_effective_type(struct llcache_handle *handle,
content/mimesniff.h:nserror mimesniff_init(void);
content/mimesniff.h:void mimesniff_fini(void);
desktop/netsurf.c: ret = mimesniff_init();
desktop/netsurf.c: mimesniff_fini();

This immediately shows me that this API is used in only a very small area, this is often not the case but the general approach still applies.

After a little investigation the usage is effectively that the mimesniff_init API must be called before the mimesniff_compute_effective_type API and the mimesniff_fini releases the initialised resources.

A simple test case was added to cover the API, this exercised the behaviour both when the init was called before the computation and not. Also some simple tests for a limited number of well behaved inputs.

By changing to using the global string table the initialisation and finalisation API can be removed altogether along with a large amount of global context and pre-processor macros. This single change removes a lot of smell from the module and raises test coverage both because the global string table already has good coverage and because there are now many fewer lines and conditionals to check in the mimesniff module.

I stopped the refactor at this point but were this more than an example I probably would have:
  • made the compute_effective_type interface simpler with fewer, simpler parameters
  • ensured a solid set of test inputs
  • examined using a fuzzer to get a better test corpus.
  • added documentation comments
  • updated the implementation to 2017 specification.
Conclusion

The approach examined here reduces the smell of code in an incremental, testable way to improve the codebase going forward. This is mainly necessary on larger, complex codebases where technical debt and bit-rot are real issues that can quickly overwhelm a codebase if not kept in check.

This technique is subjective but helps a programmer to quantify and examine a piece of code in a structured fashion. However it is only a tool and should not be over-applied nor used as a proxy metric for code quality.
Categories: FLOSS Project Planets
Syndicate content