FLOSS Project Planets

Deeson: Thoughts on certification from the UK’s 1st Drupal 8 Grand Master

Planet Drupal - Thu, 2017-09-14 04:16

As Drupal specialists, we’re proud to be the largest Acquia Certified team in Europe. And last month our development manager Mark Pavlitski became the first person in the UK to achieve Drupal 8 Grand Master status!

This special recognition is awarded to the best of the best Drupal developers, and requires the participant to pass three exams: Acquia Certified Developer, Back-end Specialist, and Front-end Specialist.

In this post, Mark shares his insights into the certification process.

The exams

I started with the Drupal 8 Developer test, which is more general than the subsequent two, and covers Drupal site building, theming, module development and fundamental web concepts. 

Then I sat the Drupal 8 Front-end Specialist exam which, as the name implies, is focussed on front-end development and Drupal theming concepts. I found this the most challenging of the three, having had more back-end experience. But most of the questions are written in a way that will be familiar to an experienced Drupal developer.

Finally, I sat the Drupal 8 Back-end Specialist exam. I found this one more straightforward, given my experience, though still challenging at times.

My tips for other developers

Acquia’s certification tests take place on Kryterion’s WebAssessor platform. Officially it supports Macs, but I ran into various issues with the software. Although their support was very helpful, I ended up switching to a Windows laptop to take the tests.

All of the questions are scenario based, describing a Drupal development problem with multiple choice answers. There were a couple of typos and one or two ambiguously worded questions, but overall the tests are presented in a way that will make sense to any seasoned Drupal developer.

The results

I was pleasantly surprised by how quickly the test results appear on the Acquia certification registry. The test portal says the results will take a couple of weeks to appear, but in my case it was as quick as a few hours.

Overall my experience with the Acquia Certification programme was great. The tests were well structured, and challenging but not confusing. I’d definitely recommend certification as a way for Drupal businesses and professionals to validate their skills and experience.

Want to be part of the largest Acquia Certified team in Europe and get paid time to support your open source projects? We’re hiring.

Categories: FLOSS Project Planets

Python Bytes: #43 Python string theory, v2

Planet Python - Thu, 2017-09-14 04:00
Python Bytes 43

This episode is brought to you by Rollbar: pythonbytes.fm/rollbar

Brian #1: future-fstrings (https://github.com/asottile/future-fstrings)
  • A backport of f-strings to Python < 3.6.
  • Include an encoding string at the top of your file (this replaces the utf-8 line if you already have it).
  • And then write Python 3.6 f-string code as usual!

    # -*- coding: future_fstrings -*-
    thing = 'world'
    print(f'hello {thing}')

  • In action:

    $ python2.7 main.py
    hello world

  • I’m still undecided if I like this sort of monkeying with the language through the encoding mechanism back door.

Michael #2: The Fun of Reinvention (https://www.youtube.com/watch?v=js_0wjzuMfc)
  • Keynote from PyCon Israel.
  • David Beazley rocks it again.
  • Let’s take Python 3.6 features and see how far we can push them.
  • Builds an aspect-oriented constraint system using just 3.6 features.

Brian #3: Sound Pattern Recognition with Python (https://medium.com/@almeidneto/sound-pattern-recognition-with-python-9aff69edce5d)
  • Using scipy.io.wavfile.read to read a .wav file.
  • Looking for peaks (knocks).
  • Using minimum values to classify peaks, and a minimum distance between peaks.
  • This is an interesting start into audio measurements using Python.
  • Would be fun to extend to some basic scope measurements, like sampling with a resolution bandwidth, trigger thresholds, pre-trigger time guards, etc.

Michael #4: PEP 550: Execution Context (https://www.python.org/dev/peps/pep-0550/)
  • From the guys at magic.io.
  • Adds a new generic mechanism of ensuring consistent access to non-local state in the context of out-of-order execution, such as in Python generators and coroutines.
  • Thread-local storage, such as threading.local(), is inadequate for programs that execute concurrently in the same OS thread. This PEP proposes a solution to this problem.
  • A few examples of where thread-local storage (TLS) is commonly relied upon:
      • Context managers like decimal contexts, numpy.errstate, and warnings.catch_warnings.
      • Request-related data, such as security tokens and request data in web applications, language context for gettext, etc.
      • Profiling, tracing, and logging in large code bases.
  • The motivation from uvloop (https://github.com/magicstack/uvloop) is obviously at work here.

Brian #5: Intro to Threads and Processes in Python (https://medium.com/@bfortuner/python-multithreading-vs-multiprocessing-73072ce5600b)
  • Beginner’s guide to parallel programming.
  • Threads and processes are both useful for different kinds of problems.
  • This is a good quick explanation of when and where to use either. With pictures!
  • Threads
      • Like mini processes that live inside one process.
      • Share memory space with other threads.
      • Cannot run simultaneously in Python (there are some workarounds), due to the GIL.
      • Good for tasks waiting on IO.
  • Processes
      • Controlled by the OS.
      • Can run simultaneously.
      • Good for CPU-intensive work because you can use multiple cores.

Michael #6: Alternative filesystems for Python (https://www.pyfilesystem.org/)
  • PyFilesystem: Filesystem Abstraction for Python.
  • Work with files and directories in archives, memory, the cloud, etc. as easily as your local drive.
  • Uses:
      • Write code now, decide later where the data will be stored (see the sketch after this section).
      • Unit test without writing real files.
      • Upload files to the cloud without learning a new API.
      • Sandbox your file-writing code.
  • Filesystem backends:
      • AppFS (https://www.pyfilesystem.org/page/appfs/): filesystems for application data.
      • S3FS (https://www.pyfilesystem.org/page/s3fs/): Amazon S3 filesystem.
      • FTPFS (https://www.pyfilesystem.org/page/ftpfs/): File Transfer Protocol.
      • MemoryFS (https://www.pyfilesystem.org/page/memoryfs/): an in-memory filesystem.
      • MountFS (https://www.pyfilesystem.org/page/mountfs/): a virtual filesystem that can mount other filesystems.
      • MultiFS (https://www.pyfilesystem.org/page/multifs/): a virtual filesystem that combines other filesystems.
      • OSFS (https://www.pyfilesystem.org/page/osfs/): OS filesystem (hard drive).
      • TarFS (https://www.pyfilesystem.org/page/tarfs/): read and write compressed tar archives.
      • TempFS (https://www.pyfilesystem.org/page/tempfs/): contains temporary data.
      • ZipFS (https://www.pyfilesystem.org/page/zipfs/): read and write Zip files.
      • and more

Our news

Michael: switch statement extension to Python: github.com/mikeckennedy/python-switch
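As promised above, here is a minimal PyFilesystem sketch of the "write code now, decide later where the data goes" idea. It assumes the fs package (PyFilesystem 2.x) is installed; the paths and the mem:// opener URL are just illustrative.

    from fs import open_fs

    # "mem://" gives an in-memory filesystem; an "osfs://." or S3/FTP opener
    # URL would work the same way without changing the rest of the code.
    home_fs = open_fs("mem://")
    home_fs.makedirs("reports", recreate=True)

    with home_fs.open("reports/summary.txt", "w") as handle:
        handle.write("hello from an in-memory filesystem\n")

    print(home_fs.listdir("reports"))        # ['summary.txt']
    with home_fs.open("reports/summary.txt") as handle:
        print(handle.read())

The same code can be pointed at a real directory, a Zip archive or a remote filesystem simply by changing the opener URL.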
Categories: FLOSS Project Planets

Talk Python to Me: #129 Falcon: The bare-metal Python web framework

Planet Python - Thu, 2017-09-14 04:00
Full featured web frameworks such as Django are great. But sometimes, living closer to the network layer is just the thing you need.
Categories: FLOSS Project Planets

pgcli: Release v1.8.0

Planet Python - Thu, 2017-09-14 03:00

Pgcli is a command line interface for Postgres that does auto-completion and syntax highlighting. You can install this version using:

$ pip install -U pgcli

Features:
  • Add fish-style auto-suggestion from history. (Thanks: Amjith Ramanujam)
  • Improved formatting of arrays in output (Thanks: Joakim Koljonen)
  • Don't quote identifiers that are non-reserved keywords. (Thanks: Joakim Koljonen)
  • Remove the ... in the continuation prompt and use empty space instead. (Thanks: Amjith Ramanujam)
  • Add \conninfo and handle more parameters with \c (issue #716) (Thanks: François Pietka)
Internal changes:
  • Preliminary work for a future change in outputting results that uses less memory. (Thanks: Dick Marinus)
  • Remove import workaround for OrderedDict, required for python < 2.7. (Thanks: Andrew Speed)
  • Use less memory when formatting results for display (Thanks: Dick Marinus).
  • Port auto_vertical feature test from mycli to pgcli. (Thanks: Dick Marinus)
  • Drop wcwidth dependency (Thanks: Dick Marinus)
Bug Fixes:
  • Fix the way we get host when using DSN (issue #765) (Thanks: François Pietka)
  • Add missing keyword COLUMN after DROP (issue #769) (Thanks: François Pietka)
  • Don't include arguments in function suggestions for backslash commands (Thanks: Joakim Koljonen)
  • Optionally use POSTGRES_USER, POSTGRES_HOST, POSTGRES_PASSWORD from environment (Thanks: Dick Marinus)
Categories: FLOSS Project Planets

James McCoy: devscripts needs YOU!

Planet Debian - Wed, 2017-09-13 23:18

Over the past 10 years, I've been a member of a dwindling team of people maintaining the devscripts package in Debian.

Nearly two years ago, I sent out a "Request For Help" since it was clear I didn't have adequate time to keep driving the maintenance.

In the meantime, Jonas split licensecheck out into its own project and took over development. Osamu has taken on much of the maintenance for uscan, uupdate, and mk-origtargz.

Although that has helped spread the maintenance costs, there's still a lot that I haven't had time to address.

Since Debian is still fairly early in the development cycle for Buster, I've decided this is as good a time as any for me to officially step down from active involvement in devscripts. I'm willing to keep moderating the mailing list and other related administrivia (which is fairly minimal given the repo is part of collab-maint), but I'll be unsubscribing from all other notifications.

I think devscripts serves as a good funnel for useful scripts to get in front of Debian (and its derivatives) developers, but Jonas may also be onto something by pulling scripts out to stand on their own. One of the troubles with "bucket" packages like devscripts is the lack of visibility into when to retire scripts. Breaking scripts out on their own, and possibly creating multiple binary packages, certainly helps with that. Maybe uscan and friends would be a good next candidate.

At the end of the day, I've certainly enjoyed being able to play my role in helping simplify the life of all the people contributing to Debian. I may come back to it some day, but for now it's time to let someone else pick up the reins.

If you're interested in helping out, you can join #devscripts on OFTC and/or send a mail to <devscripts-devel@lists.alioth.debian.org>.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppMsgPack 0.2.0

Planet Debian - Wed, 2017-09-13 21:28

A new and much enhanced version of RcppMsgPack arrived on CRAN a couple of days ago. It came together following this email to the r-package-devel list which made it apparent that Travers Ching had been working on MessagePack converters for R which required the very headers I had for use from, inter alia, the RcppRedis package.

So we joined our packages. I updated the headers in RcppMsgPack to the current upstream version 2.1.5 of MessagePack, and Travers added his helper functions that allow direct packing / unpacking of MessagePack objects at the R level, as well as tests and a draft vignette. Very exciting, and great to have a coauthor!

So now RcppMsgPack provides R with both MessagePack header files for use via C++ (or C, if you must) packages such as RcppRedis --- and direct conversion routines at the R prompt.

MessagePack itself is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it is faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves.

Changes in version 0.2.0 (2017-09-07)
  • Added support for building on Windows

  • Upgraded to MsgPack 2.1.5 (#3)

  • New R functions to manipulate MsgPack objects: msgpack_format, msgpack_map, msgpack_pack, msgpack_simplify, msgpack_unpack (#4)

  • New R functions also available as msgpackFormat, msgpackMap, msgpackPack, msgpackSimplify, msgpackUnpack (#4)

  • New vignette (#4)

  • New tests (#4)

Courtesy of CRANberries, there is also a diffstat report for this release. More information is on the RcppMsgPack page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppRedis 0.1.8

Planet Debian - Wed, 2017-09-13 21:26

A new minor release of RcppRedis arrived on CRAN last week, following the release 0.2.0 of RcppMsgPack which brought the MsgPack headers forward to release 2.1.5. This required a minor and rather trivial change in the code. When the optional RcppMsgPack package is used, we now require this version 0.2.0 or later.

We made a few internal updates to the package as well.

Changes in version 0.1.8 (2017-09-08)
  • A new file init.c was added with calls to R_registerRoutines() and R_useDynamicSymbols()

  • Symbol registration is enabled in useDynLib

  • Travis CI was updated to using run.sh

  • The (optional MessagePack) code was updated for MsgPack 2.*

Courtesy of CRANberries, there is also a diffstat report for this release. More information is on the RcppRedis page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Aaron Morton: Phantom Consistency Mechanisms

Planet Apache - Wed, 2017-09-13 20:00

In this blog post we will take a look at consistency mechanisms in Apache Cassandra. There are three reasonably well documented features serving this purpose:

  • Read repair gives the option to sync data on read requests.
  • Hinted handoff is a buffering mechanism for situations when nodes are temporarily unavailable.
  • Anti-entropy repair (or simply just repair) is a process of synchronizing data across the board.

What is far less known, and what we will explore in detail in this post, is a fourth mechanism Apache Cassandra uses to ensure data consistency. We are going to see Cassandra perform another flavour of read repairs, but in a far sneakier way.

Setting things up

In order to see this sneaky repair happening, we need to orchestrate a few things. Let’s just blaze through some initial setup using Cassandra Cluster Manager (ccm - available on github).

# create a cluster of 2x3 nodes
ccm create sneaky-repair -v 2.1.15
ccm updateconf 'num_tokens: 32'
ccm populate --vnodes -n 3:3

# start nodes in one DC only
ccm node1 start --wait-for-binary-proto
ccm node2 start --wait-for-binary-proto
ccm node3 start --wait-for-binary-proto

# create table and keyspace
ccm node1 cqlsh -e "CREATE KEYSPACE sneaky WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};"
ccm node1 cqlsh -e "CREATE TABLE sneaky.repair (k TEXT PRIMARY KEY, v TEXT);"

# insert some data
ccm node1 cqlsh -e "INSERT INTO sneaky.repair (k, v) VALUES ('firstKey', 'firstValue');"

The familiar situation

At this point, we have a cluster up and running. Suddenly, “the requirements change” and we need to expand the cluster by adding one more data center. So we will do just that and observe what happens to the consistency of our data.

Before we proceed, we need to ensure some determinism and turn off Cassandra’s known consistency mechanisms (we will not be disabling anti-entropy repair as that process must be initiated by an operator anyway):

# disable hinted handoff
ccm node1 nodetool disablehandoff
ccm node2 nodetool disablehandoff
ccm node3 nodetool disablehandoff

# disable read repairs
ccm node1 cqlsh -e "ALTER TABLE sneaky.repair WITH read_repair_chance = 0.0 AND dclocal_read_repair_chance = 0.0"

Now we expand the cluster:

# start nodes
ccm node4 start --wait-for-binary-proto
ccm node5 start --wait-for-binary-proto
ccm node6 start --wait-for-binary-proto

# alter keyspace
ccm node1 cqlsh -e "ALTER KEYSPACE sneaky WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3};"

With these commands, we have effectively added a new DC into the cluster. From this point, Cassandra can start using the new DC to serve client requests. However, there is a catch. We have not populated the new nodes with data. Typically, we would do a nodetool rebuild. For this blog post we will skip that, because this situation allows some sneakiness to be observed.

Sneakiness: blocking read repairs

Without any data being put on the new nodes, we can expect no data to be actually readable from the new DC. We will go to one of the new nodes (node4) and do a read request with LOCAL_QUORUM consistency to ensure only the new DC participates in the request. After the read request we will also check the read repair statistics from nodetool, but we will set that information aside for later:

ccm node4 cqlsh -e "CONSISTENCY LOCAL_QUORUM; SELECT * FROM sneaky.repair WHERE k ='firstKey';" ccm node4 nodetool netstats | grep -A 3 "Read Repair" k | v ---+--- (0 rows)

No rows are returned as expected. Now, let’s do another read request (again from node4), this time involving at least one replica from the old DC thanks to QUORUM consistency:

ccm node4 cqlsh -e "CONSISTENCY QUORUM; SELECT * FROM sneaky.repair WHERE k ='firstKey';" ccm node4 nodetool netstats | grep -A 3 "Read Repair" k | v ----------+------------ firstKey | firstValue (1 rows)

We now got a hit! This is quite unexpected, because we did not run a rebuild or repair in the meantime, and hinted handoff and read repairs have been disabled. How come Cassandra went ahead and fixed our data anyway?

In order to shed some light onto this issue, let’s examine the nodetool netstats output from before. We should see something like this:

# after first SELECT using LOCAL_QUORUM
ccm node4 nodetool netstats | grep -A 3 "Read Repair"
Read Repair Statistics:
Attempted: 0
Mismatch (Blocking): 0
Mismatch (Background): 0

# after second SELECT using QUORUM
ccm node4 nodetool netstats | grep -A 3 "Read Repair"
Read Repair Statistics:
Attempted: 0
Mismatch (Blocking): 1
Mismatch (Background): 0

# after third SELECT using LOCAL_QUORUM
ccm node4 nodetool netstats | grep -A 3 "Read Repair"
Read Repair Statistics:
Attempted: 0
Mismatch (Blocking): 1
Mismatch (Background): 0

From this output we can tell that:

  • No read repairs happened (Attempted is 0).
  • One blocking read repair actually did happen (Mismatch (Blocking) is 1).
  • No background read repair happened (Mismatch (Background) is 0).

It turns out there are two read repairs that can happen:

  • A blocking read repair happens when a query cannot complete with the desired consistency level without actually repairing the data. The read_repair_chance setting has no impact on this.
  • A background read repair happens in situations when a query succeeds but inconsistencies are found. This happens with read_repair_chance probability.
The take-away

To sum things up, it is not possible to entirely disable read repairs and Cassandra will sometimes try to fix inconsistent data for us. While this is pretty convenient, it also has some inconvenient implications. The best way to avoid any surprises is to keep the data consistent by running regular repairs.

In situations featuring non-negligible amounts of inconsistent data this sneakiness can cause a lot of unexpected load on the nodes, as well as the cross-DC network links. Having to do cross-DC reads can also introduce additional latency. Read-heavy workloads and workloads with large partitions are particularly susceptible to problems caused by blocking read repair.

One situation in which a lot of inconsistent data is guaranteed is when a new data center gets added to the cluster. In these situations, LOCAL_QUORUM is necessary to avoid blocking read repairs until a rebuild or a full repair is done. Using LOCAL_QUORUM is twice as important when a data center expansion happens for the first time: in a single data center scenario, QUORUM and LOCAL_QUORUM have virtually the same semantics, and it is easy to forget which one is actually used.
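At the application level, the same advice can be made explicit by pinning reads to the local data centre. The following is only a sketch using the DataStax Python driver (cassandra-driver), assuming node4 of the ccm cluster above is reachable on 127.0.0.4; it is not part of the original walkthrough.

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    cluster = Cluster(["127.0.0.4"])
    session = cluster.connect("sneaky")

    # LOCAL_QUORUM only waits for replicas in the coordinator's data centre,
    # so an empty, not-yet-rebuilt remote DC is never asked to participate
    # and cannot trigger a blocking read repair.
    query = SimpleStatement(
        "SELECT k, v FROM repair WHERE k = %s",
        consistency_level=ConsistencyLevel.LOCAL_QUORUM,
    )
    for row in session.execute(query, ["firstKey"]):
        print(row.k, row.v)

    cluster.shutdown()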

Categories: FLOSS Project Planets

Justin Mason: Links for 2017-09-13

Planet Apache - Wed, 2017-09-13 19:58
Categories: FLOSS Project Planets

Accessibility improvements in Randa

Planet KDE - Wed, 2017-09-13 17:38

Accessibility in KDE and Qt is constantly improving. Sometimes a change of scenery helps focus and boosts productivity. Mix that with a bunch of great people and good things will start happening. It has been possible to create accessible Qt applications on Linux for a while, but of course not everything will just work out of the box untested. A while back Mario asked me to join this year’s Randa Meeting where KDE people discuss and fix issues. It turns out that was a great idea. I haven’t been able to focus much on applications and user experience lately, but with this backdrop it works.

Upon arrival I sat down with Marco and David of Plasma fame and we quickly got Orca (the screen reader on Linux) working on their laptops. We discussed what is wrong and where improvements would be most needed for blind users to use the Plasma desktop; then we got to work and had the first fixes even before lunch, wow. Adding a few accessibility hints and poking hard at keyboard navigation got us much further.

This means that KRunner – Plasma Desktop’s app launcher – is now much more accessible, a great enabler. There’s more work going into the menu and panel, hopefully we’ll see much improved keyboard usability for these by the end of the week. It’s good to see how Qt accessibility works nicely with Qt Quick.

In the afternoon, we had a round of introductions and talks. For accessibility some key points were:
– Don’t do custom stuff if you can avoid it. This includes colors and fonts, but also focus handling.
– Try running your application with keyboard only. And then mouse only.
– Make sure that focus handling actually works, then test with a screen reader.
– Oh, and focus handling. Check the order of your tab focus chain (in Qt Designer, or by running your Qt Quick application; while I’ve lately become a big fan of the Qt Quick Designer, I don’t think it can detect all corner cases of tab key handling yet).
– I should write and talk more about FocusScope, one of our best and most confusing one bit friends in Qt Quick.

I sat down to poke at the systemsettings module, making it easier to debug and a bit more reliable. This morning I sat down with Ade to poke at why Calamares (the installer framework with the highest squid content ever) is not accessible. When running it as a regular user, the results were actually all in all quite OK. But it usually (for historical reasons) gets launched as root. While that may be fixed eventually, it was worth investigating what the issue really is, because, well, it should work. After a bit of poking, comparing DBus messages and then a break (Yoga and Acrobatics), we spotted that an old bug hadn’t actually been really fixed, just almost. Applications run as root would connect to the right DBus bus (AT-SPI2 runs its own bus), but we just failed to actually initialize things properly. Turns out that in this slightly different code path, we’d emit a signal in a constructor, before it was connected to the receiver… oops. The fix is making its way into Qt as we speak, so everything is looking and sounding good.
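For readers who want to see the class of bug in miniature: the following PyQt5 sketch (not the actual Qt accessibility code, just an illustration) emits a signal from a constructor before anything is connected, so the notification is silently lost.

    from PyQt5.QtCore import QObject, pyqtSignal

    class Bridge(QObject):
        enabled_changed = pyqtSignal(bool)

        def __init__(self):
            super().__init__()
            # Too early: nothing is connected yet, so this emission vanishes,
            # much like the constructor-time signal in the bug described above.
            self.enabled_changed.emit(True)

    bridge = Bridge()
    bridge.enabled_changed.connect(lambda on: print("enabled:", on))
    bridge.enabled_changed.emit(True)   # only this later emission is seen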

[Note: the blog post was written yesterday, but I never got around to publishing it, so everything is off by a day, just shift it back by one day. Then you can imagine how I will sit down the next evening for a bit of late night blogging and pancakes (that’s about now).]


Help the Randa Meeting and other sprints!

The post Accessibility improvements in Randa appeared first on Qt Blog.

Categories: FLOSS Project Planets

Last week development in Elisa

Planet KDE - Wed, 2017-09-13 16:29

I have decided to try to publish a short (or not too short) blog post each week when some development happens in the Elisa Git repository. I am inspired amongst others by the current posts about the development of Kube.

I have updated the wiki page about Elisa to include instructions on how to build Elisa. Please have a look and improve them if you can.

The following items have been pushed:

  • A fix for a memory leak when modifying the paths to be indexed by the Elisa files indexer;
  • Do not display the disc number in the playlist when the track is from an album with a single disc.

I am still working on the notifications, and some small progress has been made on the integration of visualizations when playing music.


Categories: FLOSS Project Planets

Python Anywhere: The PythonAnywhere newsletter, September 2017

Planet Python - Wed, 2017-09-13 13:59

Gosh, and we were doing so well. After managing a record seven of our "monthly" newsletters back in 2016, it's mid-September and we haven't sent a single one so far this year :-( Well, better late than never! Let's see what's been going on.

The PythonAnywhere API

Our API is now in public beta! Just go to the "API token" tab on the "Account" page to generate a token and get started.

You can do lots with it already:

  • Create, reload and reconfigure websites: our very own Harry has written a neat script that uses it to create a completely new Django website, complete with a virtualenv.
  • Get links to share files from your PythonAnywhere file storage with other people
  • List your consoles, and close them.

We're planning to add API support for creating, modifying and deleting scheduled tasks very soon.

Full documentation is here. We'd love your feedback and any suggestions about what we need to add to it. Just drop us a line at support@pythonanywhere.com.
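For example, listing your consoles takes only a few lines with requests. Treat this as a sketch: the username, token and response field names below are placeholders, so check the documentation linked above for the exact endpoints.

    import requests

    USERNAME = "yourusername"   # placeholder -- use your own account name
    TOKEN = "your-api-token"    # from the "API token" tab on the Account page

    response = requests.get(
        "https://www.pythonanywhere.com/api/v0/user/{user}/consoles/".format(user=USERNAME),
        headers={"Authorization": "Token " + TOKEN},
    )
    response.raise_for_status()

    for console in response.json():
        # Field names are assumptions based on the public docs.
        print(console.get("id"), console.get("executable"))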

Other nifty new stuff, part 1

You might have noticed something new in that description of the API calls. You might have asked yourself "what's all this about sharing files? I don't remember anything about that."

You're quite right -- it's a new thing: you can now generate a sharing link for any file from inside the PythonAnywhere editor. Send the link to someone else, and they'll get a page allowing them to copy it into their own account. Let us know if you find it useful :-)

Other nifty stuff, part 2

Of course, no Python developer worth their salt would ever consider using an old version of the language. In particular, we definitely don't have any bits of Python 2.7 lurking in our codebase. Definitely not. Nope.

Anyway, adding Python 3.6 support was super-high priority for us -- and it went live earlier on this year.

One important thing -- it's only supported in our "dangermouse" system image. If your account was created in the last year, you're already using dangermouse, so you'll already have it. But if your account is older, and you haven't switched over yet, maybe it's time? Just drop us a line.

The inside scoop from the blog and the forums

Some new help pages

A couple of new pages from our ever-expanding collection:

New modules

Although you can install Python packages on PythonAnywhere yourself, we like to make sure that we have plenty of batteries included.

We haven't installed any new system modules for Python 2.7, 3.3, 3.4 or 3.5 recently -- but we have installed everything we thought might be useful as part of our Python 3.6 install :-)

New whitelisted sites

Paying PythonAnywhere customers get unrestricted Internet access, but if you're a free PythonAnywhere user, you may have hit problems when writing code that tries to access sites elsewhere on the Internet. We have to restrict you to sites on a whitelist to stop hackers from creating dummy accounts to hide their identities when breaking into other people's websites.

But we really do encourage you to suggest new sites that should be on the whitelist. Our rule is, if it's got an official public API, which means that the site's owners are encouraging automated access to their server, then we'll whitelist it. Just drop us a line with a link to the API docs.

We've added too many sites since our last newsletter to list them all -- but please keep them coming!

That's all for now

That's all we've got this time around. We have some big new features in the pipeline, so keep tuned! Maybe we'll even get our next newsletter out in October :-)

Categories: FLOSS Project Planets

FSF Blogs: Only a short time left to pre-order the Talos II; pre-orders end September 15th

GNU Planet! - Wed, 2017-09-13 13:42

We wrote previously about why you should support the Talos II from Raptor Engineering. The pre-order period for the Talos II is almost over. Making a pre-order will help them to launch this much-needed system. The goal for the folks at Raptor Engineering has always been to gain Respects Your Freedom certification. We certified a lot of new devices this year, and if we want to keep seeing those numbers increase, then it is critical that we support projects like this. As we said in our last post:

The unfortunate reality is that x86 computers come encumbered with built-in low-level backdoors like the Intel Management Engine, as well as proprietary boot firmware. This means that users can't gain full control over their computers, even if they install a free operating system.

While people are currently working to overcome the Intel Management Engine problem, each new generation of Intel CPUs is a new problem. Even if the community succeeds fully with one generation, it has to start over with the next one. This is precisely why the Talos II is important. As we said previously:

For the future of free computing, we need to build and support systems that do not come with such malware pre-installed, and the Power9-based Talos II promises to be a great example of just such a system. Devices like this are the future of computing that Respects Your Freedom.

You should help make the Talos II a success by making a pre-order by September 15th. The FSF Licensing & Compliance Lab will have to do another evaluation once it is actually produced to be sure it meets our certification standards, but we have high hopes. Here is what you can do to help:

Categories: FLOSS Project Planets

Deeson: The slow but timely death of user 1

Planet Drupal - Wed, 2017-09-13 12:26

Change is hard, but sometimes it's also for the better.

All platforms have their issues, and Drupal is no different. These quirks, known as Drupalisms, can be the source of many WTF moments for developers when the code or functionality does not work in the way they expect.

As Drupal leaves the island of doing things in its own way, one of the stowaways still onboard is user 1.

User 1 is the first Drupal user on a Drupal site with the user ID number of 1. User 1 is hardcoded to have all permissions; their access cannot be controlled through the administration interface. User 1 has all the site keys and has to be dealt with uniquely in code.

It’s time for us to kill user 1. 

In its place, all users will be treated in the same way using the standard roles and permissions model.

Key benefits

There are several benefits, some of them rather major:

Security improvement: Once a site has been built or has proper roles defined, you can take away the admin role from all users. This ensures there are no accounts that put your entire website at risk should they be compromised.

Code stability: I had to fix a few dozen tests because they relied on user 1 being special. The tests were not functioning as intended, meaning they were not actually covering the code they should have. Removing the UID1 Drupalism will ensure our tests run with the right permissions defined.

Consistency: What good is an access layer if there is a special exception that can bypass everything? An example of this being a downside is a bunch of administrative local tasks (tabs) or actions ("+"-icon links) being put behind sensible access checks, only to have all gazillion of them clutter the UI for user 1 because he has god-mode haxx turned on.

Reducing the number of Drupalisms: We need to distinguish between Drupalisms that define what Drupal is and those that negatively characterize Drupal by needlessly increasing its learning curve. The special case of UID1 belongs to the latter category. There are very few systems that still have god-mode accounts. And for good reason (see above items). So let's destroy yet another barrier for outside devs to join our project.

Summary

The issue to remove user 1 has been around since 2009, so the concept isn’t new. I resurrected the issue earlier this year and it seems to be building momentum now.

If this is something that interests you, then please head over to the issue queue, read the discussions and try out the patch: https://www.drupal.org/node/540008

Let’s get this into Drupal 8.5.x!

Interested in joining our team? Deeson is hiring!

Categories: FLOSS Project Planets

Kushal Das: Network isolation using NetVMs and VPN in Qubes

Planet Python - Wed, 2017-09-13 12:25

In this post, I am going to talk about the isolation of network for different domains using VPN on Qubes. The following shows the default network configuration in Qubes.

The network hardware is attached to a special domain called sys-net. This is the only domain which directly talks to the outside network. Then a domain named sys-firewall connects to sys-net and all other VMs use sys-firewall to access the outside network. These kinds of special domains are also known as NetVM as they can provide network access to other VMs.

Creating new NetVMs for VPN

The easiest way is to clone the existing sys-net domain to a new domain. In my case, I have created two different domains, mynetwork and vpn2 as new NetVMs in dom0.

$ qvm-clone sys-net mynetwork
$ qvm-clone sys-net vpn2

As the next step, I have opened the settings for these VMs and marked sys-net as the NetVM for both. I have also installed the openvpn package in the TemplateVM so that both of the new NetVMs can find that package.

Setting up openvpn

I am not running openvpn as a proper service because I want to switch between the different VPN services I have access to. That also means a bit of manual work to set up the right /etc/resolv.conf file in the NetVMs and any corresponding VMs which access the network through these.

$ sudo /usr/sbin/openvpn --config connection_service_name.ovpn

So, the final network right now looks like the following diagram. The domains (where I am doing actual work) are connected into different VPN services.

Categories: FLOSS Project Planets

gnuastro @ Savannah: Gnuastro 0.4 released

GNU Planet! - Wed, 2017-09-13 12:07

I am happy to announce that the fourth release of Gnuastro is now available.

GNU Astronomy Utilities (Gnuastro) is an official GNU package consisting of various command-line programs and library functions for the manipulation and analysis of astronomical data. All the programs share the same basic command-line user interface for the comfort of both the users and developers. For the full list of Gnuastro's library and programs please see the links below, respectively:

https://www.gnu.org/s/gnuastro/manual/html_node/Gnuastro-library.html
https://www.gnu.org/s/gnuastro/manual/html_node/Gnuastro-programs-list.html

The emphasis in this release has mainly been on features to improve the user experience of Gnuastro's programs. The full list of major new/changed features in this release can be seen in the NEWS file and is also appended to this announcement below [*].

Here are the compressed sources for this release:
http://ftp.gnu.org/gnu/gnuastro/gnuastro-0.4.tar.gz (4.4MB)
http://ftp.gnu.org/gnu/gnuastro/gnuastro-0.4.tar.lz (3.0MB)

Here are the GPG detached signatures[**]:
http://ftp.gnu.org/gnu/gnuastro/gnuastro-0.4.tar.gz.sig
http://ftp.gnu.org/gnu/gnuastro/gnuastro-0.4.tar.lz.sig

Use a mirror for higher download bandwidth (may need a day or two to sync):
http://ftpmirror.gnu.org/gnuastro

Here are the MD5 and SHA1 checksums:
a5d68d008ee5de9197907a35b3002988 gnuastro-0.4.tar.gz
9b79efe278645c1510444bd42e48b83f gnuastro-0.4.tar.lz
c6113658a119a9de785b04f4baceb3f7e6560360 gnuastro-0.4.tar.gz
69317d10d13ac72fdaa627a03ed77a4e307d4cb7 gnuastro-0.4.tar.lz

I am very grateful to Vladimir Markelov for contributions to the code of this release and (in alphabetical order) to Marjan Akbari, Fernando Buitrago, Adrian Bunk, Antonio Diaz Diaz, Mosè Giordano, Stephen Hamer, Raúl Infante Sainz, Aurélien Jarno, Alan Lefor, Guillaume Mahler, William Pence, Ole Streicher, Ignacio Trujillo and David Valls-Gabaud for their great suggestions, help and bug reports that made this release possible.

Gnuastro 0.4 tarball was bootstrapped (built) with the following tools:

  • Texinfo 6.4
  • Autoconf 2.69
  • Automake 1.15.1
  • Libtool 2.4.6
  • Help2man 1.47.4
  • Gnulib v0.1-1593-g9d3e8e18d
  • Autoconf Archives v2017.03.21-138-g37a7575

Note that these are not installation dependencies, for those, please see
https://www.gnu.org/software/gnuastro/manual/html_node/Dependencies.html

Cheers,
Mohammad

--
Mohammad Akhlaghi,
Postdoctoral research fellow,
Centre de Recherche Astrophysique de Lyon (CRAL),
Observatoire de Lyon. 9, Avenue Charles André,
Saint Genis Laval (69230), France.

NEWS file for this release New features
  • All programs: `.fit' is now a recognized FITS file suffix.
  • All programs: ASCII text files (tables) created with CRLF line terminators (for example text files created in MS Windows) are now also readable as input when necessary.
  • Arithmetic: now has a new `--globalhdu' (`-g') option which can be used once for all the input images.
  • MakeNoise: with the new `--sigma' (`-s') option, it is now possible to directly request the noise sigma or standard deviation. When this option is called, the `--background', `--zeropoint' and other option values will be ignored.
  • MakeProfiles: the new `--kernel' option can make a kernel image without the need to define a catalog. With this option, a catalog (or accompanying background image) must not be given.
  • MakeProfiles: the new `--pc', `--cunit' and `--ctype' options can be used to specify the PC matrix, CUNIT and CTYPE world coordinate system keywords of the output FITS file.
  • MakeProfiles: the new `distance' profile will save the radial distance of each pixel. This may be used to define your own profiles that are not currently supported in MakeProfiles.
  • MakeProfiles: with the new `--mcolisbrightness' ("mcol-is-brightness") option, the `--mcol' values of the catalog will be interpreted as total brightness (sum of pixel values), not magnitude.
  • NoiseChisel: with the new `--dilatengb' option, it is now possible to identify the connectivity of the final dilation.
  • Library: Functions that read data from an ASCII text file (`gal_txt_table_info', `gal_txt_table_read', `gal_txt_image_read') now also operate on files with CRLF line terminators.
Changed features
  • Crop: The new `--center' option is now used to define the center of a single crop. Hence the old `--ra', `--dec', `--xc', `--yc' have been removed. This new option can take multiple values (one value for each dimension). Fractions are also acceptable.
  • Crop: The new `--width' option is now used to define the width of a single crop. Hence the old `--iwidth', `--wwidth' were removed. The units to interpret the value to the option are specified by the `--mode' option. With the new `--width' option it is also possible to define a non-square crop (different widths along each dimension). In WCS mode, its units are no longer arcseconds but are the same units of the WCS (degrees for angles). `--width' can also accept fractions. So to set a width of 5 arcseconds, you can give it a value of `5/3600' for the angular dimensions.
  • Crop: The new `--coordcol' option is now used to determine the catalog columns that define coordinates. Hence the old `--racol', `--deccol', `--xcol', and `--ycol' have been removed. This new option can be called multiple times and the order of its calling will be used for the column containing the center in the respective dimension (in FITS format).
  • MakeNoise: the old `--stdadd' (`-s') option has been renamed to `--instrumental' (`-i') to be more clear.
  • MakeProfiles: The new `--naxis' and `--shift' options can take multiple values for each dimension (separated by a comma). This replaces the old `--naxis1', `--naxis2' and `--xshift' and `--yshift' options.
  • MakeProfiles: The new `--ccol' option can take the center coordinate columns of the catalog (in multiple calls) and the new `--mode' option is used to identify what standard to interpret them in (image or WCS). Together, these replace the old `--xcol', `--ycol', `--racol' and `--deccol'.
  • MakeProfiles: The new `--crpix', `--crval' and `--cdelt' options now accept multiple values separated by a comma. So they replace the old `--crpix1', `--crpix2', `--crval1', `--crval2' and `--resolution' options.
  • `gal_data_free_contents': when the input `gal_data_t' is a tile, its `array' element will not be freed. This enables safe usage of this function (and thus `gal_data_free') on tiles without worrying about the memory block associated with the tile.
  • `gal_box_bound_ellipse' is the new name for the old `gal_box_ellipse_in_box' (to be more clear and avoid repetition of the term `box'). The input position angle is now also in degrees, not radians.
  • `gal_box_overlap' now works on data of any dimensionality and thus also needs the number of dimensions (elements in each input array).
  • `gal_box_border_from_center' now accepts an array of coordinates as one argument and the number of dimensions as another. This allows it to work on any dimensionality.
  • `gal_fits_img_info' now also returns the name and units of the dataset (if they aren't NULL). So it takes two extra arguments.
  • `gal_wcs_pixel_scale' now replaces the old `gal_wcs_pixel_scale_deg', since it doesn't only apply to degrees. The pixel scale units are defined by the units of the WCS.
  • `GAL_TILE_PARSE_OPERATE' (only when `OTHER' is given) can now parse and operate on different datasets independent of the size of allocated block of memory (the tile sizes of `IN' and `OTHER' have to be identical, but not their allocated blocks of memory). Until now, it was necessary for the two blocks to have the same size and this is no longer the case.
Bug fixes
  • MakeProfiles long options on 32bit big endian systems (bug #51341).
  • Pure rotation around pixel coordinate (0,0) (bug #51353).
  • NoiseChisel segfault when no usable region for sky clumps (bug #51372).
  • Pixel scale measurement when dimension scale isn't equal or doesn't decrease (bug #51385).
  • Improper types for function code in MakeProfiles (bug #51467).
  • Crashes on 32-bit and big-endian systems (bug #51476).
  • Warp's align matrix when second dimension must be reversed (bug #51536).
  • Reading BZERO for unsigned 64-bit integers (bug #51555).
  • Arithmetic with one file and no operators (bug #51559).
  • NoiseChisel segfault when detection contains no clumps (bug #51906).
Checking integrity

Use a .sig file to verify that the corresponding file (without the .sig suffix) is intact. First, be sure to download both the .sig file and the corresponding tarball. Then, run a command like this:

  gpg --verify gnuastro-0.4.tar.gz.sig

If that command fails because you don't have the required public key, then run this command to import it:

and rerun the 'gpg --verify' command.

Categories: FLOSS Project Planets

Discovering South America – Qt Con Brazil

Planet KDE - Wed, 2017-09-13 11:52

Few weeks ago I attended QtCon Brasil, an event organised by Brazilian members in the KDE Community who wanted to have an outreach event to the local technology community about Qt and beyond. It was great.

It’s always refreshing to get out of your own circles to meet new people and hear what they are up to. For me, it was more notable than ever! Different culture, different people, different backgrounds, different hemisphere!

We had a variety of presentations. From the mandatory KDE Frameworks talk by Filipe:

Some PyQt experience by Eliakin

And a lot more, although I didn’t understand everything, given that my limited knowledge of the language consists of mapping it to Spanish or Catalan.

We got to hear about many projects in the region doing really cool stuff with Qt. From drug research and development to Point of Sale devices.
Those of us in the Free Software world are not always exposed to the good deal of development happening right around us with the same technologies. It is fundamental to keep having such events where we learn how people create software, even if it is in closed environments.

Myself, I got to present Kirigami. It’s a very important project for KDE and I was happy to introduce it to the audience. My impression is that the presentation was well received, and I believe that the wider community sees the value in convergence and portability like we do. Starting to deliver applications useful in a variety of scenarios will bring new light to how we use our computing systems.

Here you can find my slides and the examples I used.


Categories: FLOSS Project Planets

Shirish Agarwal: Android, Android marketplace and gaming addiction.

Planet Debian - Wed, 2017-09-13 10:44

This would be a longish piece, so please bear with me and enjoy some tea, coffee, beer or anything stronger that you desire while reading below.

Categories: FLOSS Project Planets