FLOSS Project Planets

Dries Buytaert: Acquia delivers during Mueller report traffic surge

Planet Drupal - 3 hours 55 min ago

Last month, Special Counsel Robert Mueller's long-awaited report on Russian interference in the U.S. election was released on the Justice.gov website.

With the help of Acquia and Drupal, the report was successfully delivered without interruption, despite a 7,000% increase in traffic on its release date, according to the Ottawa Business Journal.

According to Federal Computer Week, by 5pm on the day of the report's release, there had already been 587 million site visits, with 247 million happening within the first hour.

During these types of high-pressure events when the world is watching, no news is good news. Keeping sites like this up and available to the public is an important part of democracy and the freedom of information. I'm proud of Acquia's and Drupal's ability to deliver when it matters most!

Categories: FLOSS Project Planets

Codementor: It is easier to gather package meta-data from the PyPI package ecosystem, once you know the right way

Planet Python - 4 hours 26 min ago
It is easier to gather package meta-data from the PyPI package ecosystem, once you know the right way.
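
As a taste of what that gathering can look like (a hedged sketch, not code from the article), PyPI exposes per-package metadata as JSON at https://pypi.org/pypi/<name>/json:

import json
from urllib.request import urlopen

# Fetch the public JSON metadata for a package (here: requests)
with urlopen("https://pypi.org/pypi/requests/json") as resp:
    meta = json.load(resp)

# The "info" section carries the package-level metadata
print(meta["info"]["summary"])
print(meta["info"]["license"])
print(meta["info"]["requires_dist"])  # declared dependencies; may be None
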
Categories: FLOSS Project Planets

Jonathan Wiltshire: RC candidate of the day (3)

Planet Debian - 4 hours 35 min ago

Sometimes the list of release-critical bugs is overwhelming, and it’s hard to find something to tackle.

Bug #929017 includes a patch which needs reviewing and, if it’s appropriate, uploading.

Categories: FLOSS Project Planets

Agaric Collective: How Stewarding the Digital Commons Keeps Your Software Secure, Stable and Innovative

Planet Drupal - 5 hours 41 min ago

We live amidst a Digital Commons - technology that is built with the principles of freedom and transparency baked into its code and design. It's maintained out in the open by the free software community. This commons is invisible to many of us, but the closer we are to the technology we use, the more it comes into focus. We at Agaric are knee-deep in this Digital Commons. Our name Agaric is a nod to the mycelial nature of the open web. We help create, maintain, and promote the free and open-source software that makes up this commons.

Read more and discuss at agaric.coop.

Categories: FLOSS Project Planets

Data School: Data science best practices with pandas (video tutorial)

Planet Python - 7 hours 22 min ago

The pandas library is a powerful tool for multiple phases of the data science workflow, including data cleaning, visualization, and exploratory data analysis. However, the size and complexity of the pandas library make it challenging to discover the best way to accomplish any given task.

In this in-depth tutorial, which I presented at PyCon 2019, you'll use pandas to answer questions about a real-world dataset. Through each exercise, you'll learn important data science skills as well as "best practices" for using pandas. By the end of the tutorial, you'll be more fluent at using pandas to correctly and efficiently answer your own data science questions.

This is an intermediate level tutorial, so if you're new to pandas, I recommend starting with my other video series: Easier data analysis with pandas.

If you want to follow along with the exercises at home, you can download the dataset and notebook from GitHub.

Here are some of the topics covered in the video:

  • adjusting for bias in your dataset
  • handling missing values
  • choosing an appropriate plot
  • customizing your plot
  • using the datetime data type
  • filtering using loc versus query (see the sketch just after this list)
  • using multiple aggregation functions
  • checking for small sample sizes
  • method chaining
  • verifying your results using random samples
  • evaluating a "stringified" Python container
  • applying a custom function to a Series
  • writing lambda functions
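
To give a flavor of two of those topics, here is a minimal sketch of loc-versus-query filtering and method chaining. It uses made-up data and is not an excerpt from the tutorial:

import pandas as pd

# Made-up data, purely for illustration
df = pd.DataFrame({"state": ["RI", "RI", "MA"],
                   "stop_minutes": [8, 30, 16]})

# Filtering with loc: a boolean mask, plus optional column selection
long_stops = df.loc[df["stop_minutes"] > 15, ["state", "stop_minutes"]]

# The same filter expressed with query
long_stops_q = df.query("stop_minutes > 15")

# Method chaining: each step returns a DataFrame/Series, so the
# pipeline reads top to bottom
summary = (df.query("stop_minutes > 5")
             .groupby("state")["stop_minutes"]
             .agg(["mean", "count"]))
print(summary)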

Let me know if you have any questions, and I'm happy to answer them!

P.S. If you like this video, you should check out my interactive pandas course, Analyzing Police Activity with pandas.

Categories: FLOSS Project Planets

Little Trouble in Big Data – Part 1

Planet KDE - 8 hours 31 min ago

A few months ago, we received a phone call from a bioinformatics group at a European university. The problem they were having appeared very simple: they wanted to know how to use mmap() to be able to load a large data set into RAM at once. OK, I thought, no problem, I can handle that one. It turns out this has grown into a complex and interesting exercise in profiling and threading.

The background is that they are performing Markov-Chain Monte Carlo simulations by sampling at random from data sets containing SNP (pronounced “snips”) genetic markers for a selection of people. It boils down to a large 2D matrix of floats where each column corresponds to an SNP and each row to a person. They provided some small and medium sized data sets for me to test with, but their full data set consists of 500,000 people with 38 million SNP genetic markers!

The analysis involves selecting a column (SNP) at random from the data set, performing some computations on the data for all of the individuals, and collecting some summary statistics. Do that for all of the columns in the data set, then repeat for a large number of iterations. This allows you to approximate the underlying true distribution from the discrete data that has been collected.

That's the 10,000 ft view of the problem, so what was actually involved? Well, we undertook a bit of an adventure and learned some interesting stuff along the way, hence this blog series.

The stages we went through were:

  1. Preprocessing
  2. Loading the Data
  3. Fine-grained Threading
  4. Preprocessing Reprise
  5. Coarse Threading

In this blog, I’ll detail stages 1 and 2. The rest of the process will be revealed as the blog series unfolds, and I’ll include a final summary at the end.

1. Preprocessing

The first thing we noticed when looking at the code they already had is that quite a lot of work was being done every time the data for a column was read in. They compute some summary statistics on the column, then scale and bias all the data points in that column such that the mean is zero. Bearing in mind that each column will be processed many times (typically 10k – 1 million), this work is wasteful to repeat every time the column is used.

So, reusing some general advice from 3D graphics, we moved this work further up the pipeline into a preprocessing step. The SNP data is actually stored in a compressed form, with 4 SNP values quantized into a few bytes, which we decompress when loading. So the preprocessing step decompresses the SNP data, calculates the summary statistics, adjusts the data, and then writes the floats out to disk in the form of a ppbed file (preprocessed bed, where bed is a standard format used for this kind of data).

The upside is that we avoid all of this work on every iteration of the Monte Carlo simulation at runtime. The downside is that 1 float per SNP per person adds up to a hell of a lot of data for the larger data sets! In fact, for the full data set it’s just shy of 69 TB of floating point data! But to get things going, we were just worrying about smaller subsets. We will return to this later.
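
As a sanity check on that figure: 500,000 people × 38,000,000 SNPs is 1.9 × 10^13 floats, and at 4 bytes per float that is 7.6 × 10^13 bytes, or roughly 69 TiB.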

2. Loading the data

Even on moderately sized data sets, loading the entirety of the data set into physical RAM at once is a no-go as it will soon exhaust even the beefiest of machines. They have some 40 core, many-many-GB-of-RAM machine which was still being exhausted. This is where the original enquiry was aimed – how to use mmap(). Turns out it’s pretty easy as you’d expect. It’s just a case of setting the correct flags so that the kernel doesn’t actually take a copy of the data in the file. Namely, PROT_READ and MAP_SHARED:

void Data::mapPreprocessBedFile(const string &preprocessedBedFile)
{
    // Calculate the expected file sizes - cast to size_t so that we don't
    // overflow the unsigned int's that we would otherwise get as
    // intermediate variables!
    const size_t ppBedSize = size_t(numInds) * size_t(numIncdSnps) * sizeof(float);

    // Open and mmap the preprocessed bed file
    ppBedFd = open(preprocessedBedFile.c_str(), O_RDONLY);
    if (ppBedFd == -1)
        throw("Error: Failed to open preprocessed bed file [" + preprocessedBedFile + "]");

    ppBedMap = reinterpret_cast<float *>(mmap(nullptr, ppBedSize, PROT_READ, MAP_SHARED, ppBedFd, 0));
    if (ppBedMap == MAP_FAILED)
        throw("Error: Failed to mmap preprocessed bed file");

    ...
}

When dealing with such large amounts of data, be careful of overflows in temporaries! We had a bug where ppBedSize was overflowing and later causing a segfault.
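
To illustrate the class of bug (a contrived example, not the project's actual code):

unsigned int numPeople = 500000;
unsigned int numSnps   = 38000000;
size_t bad  = numPeople * numSnps;                  // the multiply happens in unsigned int and wraps modulo 2^32
size_t good = size_t(numPeople) * size_t(numSnps);  // 19,000,000,000,000 as intended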

So, at this point we have a float *ppBedMap pointing at the start of the huge 2D matrix of floats. That's all well and good but not very convenient to work with. The code base already made use of Eigen for vector and matrix operations, so it would be nice if we could interface with the underlying data using that.

Turns out we can (otherwise I wouldn’t have mentioned it). Eigen provides VectorXf and MatrixXf types for vectors and matrices but these own the underlying data. Luckily Eigen also provides a wrapper around these in the form of Map. Given our pointer to the raw float data which is mmap()‘d, we can use the placement new operator to wrap it up for Eigen like so:

class Data
{
public:
    Data();

    // mmap related data
    int ppBedFd;
    float *ppBedMap;
    Map<MatrixXf> mappedZ;
};

void Data::mapPreprocessBedFile(const string &preprocessedBedFile)
{
    ...
    ppBedMap = reinterpret_cast<float *>(mmap(nullptr, ppBedSize, PROT_READ, MAP_SHARED, ppBedFd, 0));
    if (ppBedMap == MAP_FAILED)
        throw("Error: Failed to mmap preprocessed bed file");

    new (&mappedZ) Map<MatrixXf>(ppBedMap, numRows, numCols);
}

At this point we can now do operations on the mappedZ matrix and they will operate on the huge data file which will be paged in by the kernel as needed. We never need to write back to this data so we didn’t need the PROT_WRITE flag for mmap.
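
For illustration, column access then looks like ordinary Eigen code. The snpIndex variable and the statistics below are hypothetical, not the project's actual computation:

int snpIndex = 42;  // hypothetical column index, chosen at random in the real analysis
const VectorXf column = mappedZ.col(snpIndex);  // only this column's pages get faulted in
const float mean = column.mean();
const float sumOfSquares = column.dot(column);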

Yay! Original problem solved and we’ve saved a bunch of work at runtime by preprocessing. But there’s a catch! It’s still slow. See the next blog in the series for how we solved this.

The post Little Trouble in Big Data – Part 1 appeared first on KDAB.

Categories: FLOSS Project Planets

Michael Stapelberg: Optional dependencies don’t work

Planet Debian - 8 hours 41 min ago

In the i3 projects, we have always tried hard to avoid optional dependencies. There are a number of reasons behind it, and as I have recently encountered some of the downsides of optional dependencies firsthand, I summarized my thoughts in this article.

What is a (compile-time) optional dependency?

When building software from source, most programming languages and build systems support conditional compilation: different parts of the source code are compiled based on certain conditions.

An optional dependency is conditional compilation hooked up directly to a knob (e.g. command line flag, configuration file, …), with the effect that the software can now be built without an otherwise required dependency.
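
In C, for example, the knob typically surfaces as a preprocessor define. Here is a hedged sketch with hypothetical function names:

#ifdef HAVE_CAIRO
    /* built with the optional dependency */
    render_title_with_cairo(title);
#else
    /* fallback path compiled when cairo support is disabled */
    render_title_basic(title);
#endif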

Let’s walk through a few issues with optional dependencies.

Inconsistent experience in different environments

Software is usually not built by end users, but by packagers, at least when we are talking about Open Source.

Hence, end users don’t see the knob for the optional dependency, they are just presented with the fait accompli: their version of the software behaves differently than other versions of the same software.

Depending on the kind of software, this situation can be made obvious to the user: for example, if the optional dependency is needed to print documents, the program can produce an appropriate error message when the user tries to print a document.

Sometimes, this isn’t possible: when i3 introduced an optional dependency on cairo and pangocairo, the behavior itself (rendering window titles) worked in all configurations, but non-ASCII characters might break depending on whether i3 was compiled with cairo.

For users, it is frustrating to only discover in conversation that a program has a feature that the user is interested in, but it’s not available on their computer. For support, this situation can be hard to detect, and even harder to resolve to the user’s satisfaction.

Packaging is more complicated

Unfortunately, many build systems don’t stop the build when optional dependencies are not present. Instead, you sometimes end up with a broken build, or, even worse: with a successful build that does not work correctly at runtime.

This means that packagers need to closely examine the build output to know which dependencies to make available. In the best case, there is a summary of available and enabled options, clearly outlining what this build will contain. In the worst case, you need to infer the features from the checks that are done, or work your way through the --help output.

The better alternative is to configure your build system such that it stops when any dependency was not found, and thereby have packagers acknowledge each optional dependency by explicitly disabling the option.
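
With CMake, for instance, that policy might look like the following sketch, assuming the optional feature wraps a pkg-config dependency and a hypothetical myapp target:

# Feature defaults to ON; packagers must pass -DENABLE_PRINTING=OFF
# explicitly to build without it.
option(ENABLE_PRINTING "Build print support (requires cairo)" ON)
if(ENABLE_PRINTING)
    find_package(PkgConfig REQUIRED)
    # REQUIRED makes the configure step fail loudly if cairo is absent
    pkg_check_modules(CAIRO REQUIRED IMPORTED_TARGET cairo)
    target_link_libraries(myapp PRIVATE PkgConfig::CAIRO)
endif()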

Untested code paths bit rot

Code paths which are not used will inevitably bit rot. If you have optional dependencies, you need to test both the code path without the dependency and the code path with the dependency. It doesn’t matter whether the tests are automated or manual, the test matrix must cover both paths.

Interestingly enough, this principle seems to apply to all kinds of software projects (but it slows down as change slows down): one might think that important Open Source building blocks should have enough users to cover all sorts of configurations.

However, consider this example: building cairo without libxrender results in all GTK application windows, menus, etc. being displayed as empty grey surfaces. Cairo does not fail to build without libxrender, but the code path clearly is broken without libxrender.

Can we do without them?

I’m not saying optional dependencies should never be used. In fact, for bootstrapping, disabling dependencies can save a lot of work and can sometimes allow breaking circular dependencies. For example, in an early bootstrapping stage, binutils can be compiled with --disable-nls to disable internationalization.

However, optional dependencies are broken so often that I conclude they are overused. Read on and see for yourself whether you would rather commit to best practices or not introduce an optional dependency.

Best practices

If you do decide to make dependencies optional, please:

  1. Set up automated testing for all code path combinations.
  2. Fail the build until packagers explicitly pass a --disable flag.
  3. Tell users their version is missing a dependency at runtime, e.g. in --version.

Categories: FLOSS Project Planets

Codementor: Building Machine Learning Data Pipeline using Apache Spark

Planet Python - 9 hours 2 min ago
Apache Spark is increasingly becoming popular in the field of data science because of its ability to deal with huge datasets and its capability to run computations in memory, which is...
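
For a flavor of what such a pipeline can look like in PySpark (an illustrative sketch with toy data, not code from the article):

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

# Toy in-memory data; a real pipeline would read from distributed storage
df = spark.createDataFrame(
    [(0.0, 1.0, 0.0), (1.0, 0.0, 1.0), (1.0, 1.0, 1.0)],
    ["f1", "f2", "label"])

# Assemble feature columns into a single vector, then fit a classifier
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(df)
model.transform(df).select("label", "prediction").show()
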
Categories: FLOSS Project Planets

Andy Wingo: bigint shipping in firefox!

GNU Planet! - 9 hours 22 min ago

I am delighted to share with folks the results of a project I have been helping out on for the last few months: implementation of "BigInt" in Firefox, which is finally shipping in Firefox 68 (beta).

what's a bigint?

BigInts are a new kind of JavaScript primitive value, like numbers or strings. A BigInt is a true integer: it can take on the value of any finite integer (subject to some arbitrarily large implementation-defined limits, such as the amount of memory in your machine). This contrasts with JavaScript number values, which have the well-known property of only being able to precisely represent integers between -2^53 and 2^53.

BigInts are written like "normal" integers, but with an n suffix:

var a = 1n;
var b = a + 42n;
b << 64n
// result: 793209995169510719488n

With the bigint proposal, the usual mathematical operations (+, -, *, /, %, <<, >>, **, and the comparison operators) are extended to operate on bigint values. As a new kind of primitive value, bigint values have their own typeof:

typeof 1n // result: 'bigint'

Besides allowing for more kinds of math to be easily and efficiently expressed, BigInt also allows for better interoperability with systems that use 64-bit numbers, such as "inodes" in file systems, WebAssembly i64 values, high-precision timers, and so on.
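
For example, a value above 2^53 survives as a BigInt where a round trip through Number silently loses precision (an illustrative snippet in the same spirit as the examples above):

var big = 2n ** 53n + 1n; // 9007199254740993n
Number(big) // result: 9007199254740992, the low bit is rounded away
big === BigInt(Number(big)) // result: false, the round trip lost information
BigInt.asUintN(64, big) // result: 9007199254740993n, still exact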

You can read more about the BigInt feature over on MDN, as usual. You might also like this short article on BigInt basics that V8 engineer Mathias Bynens wrote when Chrome shipped support for BigInt last year. There is an accompanying language implementation article as well, for those of y'all that enjoy the nitties and the gritties.

can i ship it?

To try out BigInt in Firefox, simply download a copy of Firefox Beta. This version of Firefox will be fully released to the public in a few weeks, on July 9th. If you're reading this in the future, I'm talking about Firefox 68.

BigInt is also shipping already in V8 and Chrome, and my colleague Caio Lima has a project in progress to implement it in JavaScriptCore / WebKit / Safari. Depending on your target audience, BigInt might be deployable already!

thanks

I must mention that my role in the BigInt work was relatively small; my Igalia colleague Robin Templeton did the bulk of the BigInt implementation work in Firefox, so large ups to them. Hearty thanks also to Mozilla's Jan de Mooij and Jeff Walden for their patient and detailed code reviews.

Thanks as well to the V8 engineers for their open source implementation of BigInt fundamental algorithms, as we used many of them in Firefox.

Finally, I need to make one big thank-you, and I hope that you will join me in expressing it. The road to ship anything in a web browser is long; besides the "simple matter of programming" that it is to implement a feature, you need a specification with buy-in from implementors and web standards people, you need a good working relationship with a browser vendor, you need willing technical reviewers, you need to follow up on the inevitable security bugs that any browser change causes, and all of this takes time. It's all predicated on having the backing of an organization that's foresighted enough to invest in this kind of long-term, high-reward platform engineering.

In that regard I think all people that work on the web platform should send a big shout-out to Tech at Bloomberg for making BigInt possible by underwriting all of Igalia's work in this area. Thank you, Bloomberg, and happy hacking!

Categories: FLOSS Project Planets

EuroPython: EuroPython 2019: Monday and Tuesday activities for main conference attendees

Planet Python - 13 hours 34 min ago

Although the main conference starts on Wednesday, July 10th, there’s already so much to do for attendees with the main conference ticket on Monday 8th and Tuesday 9th.

Beginners’ Day and Sponsored Trainings

You can come to the workshops and trainings venue at FHNW Campus Muttenz and:

  • pick up your conference badge
  • attend the Beginners’ Day workshop
  • attend the sponsored trainings

If you want to attend other workshops and trainings, you’ll need a separate training ticket or combined ticket.

Details on the Beginners’ Day workshop and the sponsored trainings will be announced separately.

Catering on training days not included

Since we have to budget carefully, lunch and coffee breaks are not included if you don't have a training ticket or combined ticket.

So that you don't go hungry, we have arranged for lunch coupons to be available for purchase (price to be announced later). You can also go to the grocery store on the ground floor. For coffee breaks you can go to the ground floor, to the 12th floor of the FHNW building, or outside to the beach bar (nice weather only) and buy drinks.

Enjoy,

EuroPython 2019 Team
https://ep2019.europython.eu/
https://www.europython-society.org/

Categories: FLOSS Project Planets

Test and Code: 75: Modern Testing Principles - Alan Page

Planet Python - 14 hours 36 min ago

Software testing, if done right, is done all the time, throughout the whole life of a software project. This is different than the verification and validation of a classical model of QA teams. It's more of a collaborative model that actually tries to help get great software out the door faster and iterate quicker.

One of the people at the forefront of this push is Alan Page. Alan and his podcast cohost Brent Jensen tried to boil down what modern testing looks like in the Modern Testing Principles.

I've got Alan here today, to talk about the principles, and also to talk about this transition from classical QA to testing specialists being embedded in software teams and then to software teams doing their own testing.

But that only barely scratches the surface of what we cover. I think you'll learn a lot from this discussion.

The seven principles of Modern Testing:

  1. Our priority is improving the business.
  2. We accelerate the team, and use models like Lean Thinking and the Theory of Constraints to help identify, prioritize and mitigate bottlenecks from the system.
  3. We are a force for continuous improvement, helping the team adapt and optimize in order to succeed, rather than providing a safety net to catch failures.
  4. We care deeply about the quality culture of our team, and we coach, lead, and nurture the team towards a more mature quality culture.
  5. We believe that the customer is the only one capable to judge and evaluate the quality of our product
  6. We use data extensively to deeply understand customer usage and then close the gaps between product hypotheses and business impact.
  7. We expand testing abilities and knowhow across the team; understanding that this may reduce (or eliminate) the need for a dedicated testing specialist.

Special Guest: Alan Page.

Sponsored By:

Support Test & Code - Software Testing, Development, Python

Links:

  • Tooth of the Weasel – notes and rants about software and software quality: https://angryweasel.com/blog/
  • AB Testing – Alan and Brent talk about Modern Testing – including Agile, Data, Leadership, and more: https://www.angryweasel.com/ABTesting/
  • Modern Testing Principles: https://www.angryweasel.com/ABTesting/modern-testing-principles/
  • The Lean Startup: https://amzn.to/2WigLM2
Categories: FLOSS Project Planets

Catalin George Festila: Python 3.7.3 : Using the pelican python module and GitHub.

Planet Python - 15 hours 37 min ago
This tutorial follows a similar tutorial from the web; I tested that tutorial to see if it works. It is focused on GitHub but can also be used independently on another Python platform. You need GitHub Pro for your personal account: with GitHub Pro, your personal account gets unlimited public and private repositories with unlimited collaborators. In addition to the features available
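
For reference, the usual Pelican bootstrap looks roughly like this (a generic sketch, not the exact steps from the tutorial):

pip install pelican markdown
pelican-quickstart                            # answer the prompts to scaffold the site
pelican content -o output -s pelicanconf.py   # generate the HTML into output/
pelican --listen                              # preview at http://localhost:8000
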
Categories: FLOSS Project Planets

François Marier: Installing Ubuntu 18.04 using both full-disk encryption and RAID1

Planet Debian - 17 hours 6 min ago

I recently set up a desktop computer with two SSDs using a software RAID1 and full-disk encryption (i.e. LUKS). Since this is not a supported configuration in Ubuntu desktop, I had to use the server installation medium.

This is my version of these excellent instructions.

Server installer

Start by downloading the alternate server installer and verifying its signature:

  1. Download the required files:

    wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/ubuntu-18.04.2-server-amd64.iso
    wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/SHA256SUMS
    wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/SHA256SUMS.gpg
  2. Verify the signature on the hash file:

    $ gpg --keyid-format long --keyserver hkps://keyserver.ubuntu.com --recv-keys 0xD94AA3F0EFE21092
    $ gpg --verify SHA256SUMS.gpg SHA256SUMS
    gpg: Signature made Fri Feb 15 08:32:38 2019 PST
    gpg:                using RSA key D94AA3F0EFE21092
    gpg: Good signature from "Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>" [undefined]
    gpg: WARNING: This key is not certified with a trusted signature!
    gpg:          There is no indication that the signature belongs to the owner.
    Primary key fingerprint: 8439 38DF 228D 22F7 B374 2BC0 D94A A3F0 EFE2 1092
  3. Verify the hash of the ISO file:

    $ sha256sum ubuntu-18.04.2-server-amd64.iso
    a2cb36dc010d98ad9253ea5ad5a07fd6b409e3412c48f1860536970b073c98f5 ubuntu-18.04.2-server-amd64.iso
    $ grep ubuntu-18.04.2-server-amd64.iso SHA256SUMS
    a2cb36dc010d98ad9253ea5ad5a07fd6b409e3412c48f1860536970b073c98f5 *ubuntu-18.04.2-server-amd64.iso

Then copy it to a USB drive:

dd if=ubuntu-18.04.2-server-amd64.iso of=/dev/sdX

and boot with it.

Inside the installer, use manual partitioning to:

  1. Configure the physical partitions first.
  2. Configure the RAID arrays second.
  3. Configure the encrypted partitions last.

Here's the exact configuration I used:

  • /dev/sda1 is 512 MB and used as the EFI partition
  • /dev/sdb1 is 512 MB but not used for anything
  • /dev/sda2 and /dev/sdb2 are both 4 GB (RAID)
  • /dev/sda3 and /dev/sdb3 are both 512 MB (RAID)
  • /dev/sda4 and /dev/sdb4 use up the rest of the disk (RAID)

I only set /dev/sda1 as the EFI partition because I found that adding a second EFI partition would break the installer.

I created the following RAID1 arrays:

  • /dev/sda2 and /dev/sdb2 for /dev/md2
  • /dev/sda3 and /dev/sdb3 for /dev/md0
  • /dev/sda4 and /dev/sdb4 for /dev/md1

I used /dev/md0 as my unencrypted /boot partition.

Then I created the following LUKS partitions:

  • md1_crypt as the / partition using /dev/md1
  • md2_crypt as the swap partition (4 GB) with a random encryption key using /dev/md2

Post-installation configuration

Once your new system is up, sync the EFI partitions using dd:

dd if=/dev/sda1 of=/dev/sdb1

and create a second EFI boot entry:

efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l '\EFI\ubuntu\shimx64.efi'

Ensure that the RAID drives are fully sync'ed by keeping an eye on /proc/mdstat and then reboot, selecting "ubuntu2" in the UEFI/BIOS menu.

Once you have rebooted, remove the following package to speed up future boots:

apt purge btrfs-progs

To switch to the desktop variant of Ubuntu, install these meta-packages:

apt install ubuntu-desktop gnome

then use debfoster to remove unnecessary packages (in particular the ones that only come with the default Ubuntu server installation).

Fixing booting with degraded RAID arrays

Since I have run into RAID startup problems in the past, I expected having to fix up a few things to make degraded RAID arrays boot correctly.

I did not use LVM since I didn't really feel the need to add yet another layer of abstraction on top of my setup, but I found that the lvm2 package must still be installed:

apt install lvm2

with use_lvmetad = 0 in /etc/lvm/lvm.conf.

Then in order to automatically bring up the RAID arrays with 1 out of 2 drives, I added the following script in /etc/initramfs-tools/scripts/local-top/cryptraid:

#!/bin/sh
PREREQ="mdadm"
prereqs()
{
    echo "$PREREQ"
}
case $1 in
prereqs)
    prereqs
    exit 0
    ;;
esac

mdadm --run /dev/md0
mdadm --run /dev/md1
mdadm --run /dev/md2

before making that script executable:

chmod +x /etc/initramfs-tools/scripts/local-top/cryptraid

and refreshing the initramfs:

update-initramfs -u -k all

Disable suspend-to-disk

Since I use a random encryption key for the swap partition (to avoid having a second password prompt at boot time), it means that suspend-to-disk is not going to work and so I disabled it by putting the following in /etc/initramfs-tools/conf.d/resume:

RESUME=none

and by adding noresume to the GRUB_CMDLINE_LINUX variable in /etc/default/grub before applying these changes:

update-grub
update-initramfs -u -k all

Test your configuration

With all of this in place, you should be able to do a final test of your setup:

  1. Shutdown the computer and unplug the second drive.
  2. Boot with only the first drive.
  3. Shutdown the computer and plug the second drive back in.
  4. Boot with both drives and re-add the second drive to the RAID array:

    mdadm /dev/md0 -a /dev/sdb3
    mdadm /dev/md1 -a /dev/sdb4
    mdadm /dev/md2 -a /dev/sdb2
  5. Wait until the RAID is done re-syncing and shutdown the computer.

  6. Repeat steps 2-5 with the first drive unplugged instead of the second.
  7. Reboot with both drives plugged in.

At this point, you have a working setup that will gracefully degrade to a one-drive RAID array should one of your drives fail.

Categories: FLOSS Project Planets

Web Wash: Using Pattern Trigger (Regex) in Webform Conditional Logic in Drupal 8

Planet Drupal - Wed, 2019-05-22 22:15

When you need to create survey-style forms in Drupal 8, Webform is the clear winner. It's powerful enough to create all sorts of forms, and you can even give it to your editors so they can create their own, after a little training, of course.

One part of Webform which I like is the ability to define conditional logic. For example, you can show or hide a text field based on a value from another element. You can also make an element conditionally required. It's a very useful part of Webform, and you do all of this through a UI, with no custom code.

Defining simple conditional logic, such as checking whether an element has a single value, is pretty straightforward. But when you have to deal with multiple values, this is where things get tricky.

Categories: FLOSS Project Planets

Wingware Blog: Remote Development with Wing Pro

Planet Python - Wed, 2019-05-22 21:00

In this issue of Wing Tips we take a quick look at Wing Pro's remote development capabilities.

Setting up SSH Access

Wing Pro's remote development support requires using an SSH public/private key pair and SSH agent rather than entering a password each time you connect. This is more secure and convenient, and it allows Wing to seamlessly re-establish the remote connection as needed over time. If you currently enter a password (other than a 2-factor authentication card selector) each time you ssh to the remote host, then please see SSH Setup Details for instructions.

Creating a Remote Project

To create a Wing Pro project that works with a remote host, select New Project from the Project menu and use Connect to Remote Host via SSH as the project type. You will need to choose an Identifier that Wing uses to refer to the host, and enter the Host Name either as an IP address or name (optionally in username@hostname form).

If python is not on the PATH on your remote host, or is not the Python you want to use, then you will also need to paste the full path to Python into the Python Executable field. This is typically the value printed by import sys; print(sys.executable) in the Python that you want to use.
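
For example, you can run the following on the remote host; the path it prints (e.g. /usr/bin/python3) is the value to paste into the field:

$ python -c "import sys; print(sys.executable)"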

You will only rarely need to specify any of the other values in a remote host configuration. For now, leave them set to their default values. For example:

Once you submit this dialog, Wing will probe the remote host and install the remote agent. When this is complete, you should see a dialog with some information about the remote host:

You can now point Wing to your remotely stored source code with Add Existing Directory in the Project menu. When this is complete, save the project to local disk with Save Project in the Project menu.

That's all there is to it!

At this point all of Wing's features should work with files on the remote host, including editing, debugging, unit testing, working in the Python Shell (after restarting from its Options menu), using version control, searching, code navigation, running processes from the OS Commands tool, and so forth.

How it Works

Using New Project takes care of a few steps that can also be done manually, in the event that a project or remote host connection needs to be reconfigured:

(1) Your remote host configuration can be viewed and edited from Remote Hosts in the Project menu. From here, remote host configurations can also be marked as shared, so they can be reused by multiple projects or used to open projects that are stored on the remote host.

(2) Pointing a project at a remote host is done by changing the Python Executable under the Environment tab in Project Properties to Remote and selecting the remote host configuration. The remote host configuration determines which Python is used on the remote host.

Further Reading

If your code runs in a web server or other framework on the remote host, you will need to initiate debug using Wing's wingdbstub.py module as described in the section Initiating Debug in Remote Web Development, or check out the How-Tos for details on using Wing Pro with specific frameworks and tools.

For detailed documentation on Wing Pro's remote development capabilities see Remote Development in the product manual.

Getting Help

As always, don't hesitate to email support@wingware.com or post on the Q&A Forum for help!



That's it for now! We'll be back next week with more Wing Tips for Wing Python IDE.

Categories: FLOSS Project Planets

Charles Plessy: Register your media types to the IANA!

Planet Debian - Wed, 2019-05-22 18:19

As the maintainer of the mime-support in Debian, I would like to give Kudos to Petter Reinholdtsen, who just opened a ticket at the IANA to create a text/vnd.sosi media type. May his example be followed by others!

Categories: FLOSS Project Planets
