FLOSS Project Planets

Aaron Morton: TLP Dashboards for Datadog users, out of the box.

Planet Apache - Mon, 2017-12-04 19:00

We had the pleasure of releasing our monitoring dashboards designed for Apache Cassandra on Datadog last week. It is a nice occasion to share our thoughts on Cassandra dashboard design, as it is a recurring question in the community.

We wrote a post about this on the Datadog website here.

For people using Datadog, we hope this will give more detail on how the dashboards were designed, and thus on how to use the dashboards we provided. For others, we hope this information will be useful in the process of building and then using your own dashboards, with the technology of your choice.

The Project

Building an efficient, complete, and readable set of dashboards to monitor Apache Cassandra is time-consuming and far from straightforward.

Those who have tried it probably noticed that it requires a fair amount of time and knowledge of both the monitoring technology in use (Datadog, Grafana, Graphite or InfluxDB, metrics-reporter, etc.) and of Apache Cassandra. Creating dashboards is about picking the most relevant metrics, aggregations, units, and chart types, and then gathering them in a way that this huge amount of data actually provides usable information. Dashboards need to be readable, understandable, and easy to use for the final operator.

On one hand, creating comprehensive dashboards is a long and complex task. On the other hand, every Apache Cassandra cluster can be monitored roughly the same way. Most production issues can be detected and analyzed using a common set of charts, organized the same way, for all the Apache Cassandra clusters. Each cluster may require additional operator specific dashboards or charts depending on workload and merging of metrics outside of Cassandra, but those would supplement the standard dashboards, not replace them. There are some differences depending on the Apache Cassandra versions in use, but they are relatively minor and not subject to rapid change.

In my monitoring presentation at the 2016 Cassandra Summit I announced that we were working on this project.

In December 2017 it was released for Datadog users. If you want to get started with these dashboards and you are using Datadog, see the documentation on the Datadog integration for Cassandra.

Dashboard Design

Our Approach to Monitoring

The dashboards have been designed to allow the operator to do the following:

  1. Easily detect any anomaly (Overview Dashboard)
  2. Be able to efficiently troubleshoot and fix the anomaly (Themed Dashboards)
  3. Find the bottlenecks and optimize the overall performance (Themed Dashboards)

The two latter points above can be seen as the same kind of operation, which can be supported by the same set of dashboards.

Empowering the operator

We strongly believe that showing the metrics to the operator can be a nice entry point for learning about Cassandra. Each of the themed dashboards monitors a distinct internal process of Cassandra, and most of the metrics related to that internal process are grouped within one dashboard. We think this makes it easier for the operator to understand Cassandra's internal processes.

To make it clearer, let's consider the example of someone completely new to Cassandra. On a first repair, the operator starts an incremental repair without knowing anything about it, and latencies increase substantially after a short while. Classic.

The operator would notice a read latency increase in the 'Overview Dashboard', then move to the 'Read Path Dashboard'. There the operator would be able to notice that the number of SSTables went from 50 to 800 on each node, or for a table. If the chart is there out of the box, even without knowing what an SSTable is, the operator can understand that something changed there and that it relates to the outage somehow. The operator would then search in the right direction, probably solving the issue quickly, and possibly learning in the process.

What to Monitor: Dashboards and Charts Detail

Here we will be focusing on chart details and indications on how to use each chart efficiently. While this post is a discussion of the dashboards available for Datadog, the metrics can be visualized using any tool, and we believe this would be a good starting point when setting up monitoring for Cassandra.

In the graphs, the values and percentiles chosen are sometimes quite arbitrary and often depend on the use case or Cassandra setup. The point is to give a reference, a starting point on what could be 'normal' or 'wrong' values. The Apache Cassandra monitoring documentation, the mailing list archives, or #cassandra on Freenode (IRC) are good ways to answer questions that might pop up while using the dashboards.

Some charts are deliberately duplicated across dashboards, or within a dashboard, but with distinct visualisations or aggregations.

Detect anomalies: Overview Dashboard

We don't try to troubleshoot at this stage. We want to detect outages that might impact the service, or check that the Cassandra cluster is globally healthy. To accomplish this, the Overview Dashboard aims at being both complete and minimalist.

Complete, as we want to be warned anytime "something is happening" in the Cassandra cluster. Minimalist, because we don't want important information to be missed in a flood of non-critical or too low-level information. These charts aim to answer the question: "Is Cassandra healthy?"

Troubleshoot issues and optimize Apache Cassandra: Themed dashboards

The goal here is to divide the information into smaller, more meaningful chunks. An issue will often only affect one of Cassandra's subsystems, so the operator can have all the needed information in one place when working on a specific issue, without irrelevant information (for this specific issue) hiding the more important signals.

For this reason these dashboards must maximize the information on a specific theme or internal process of Cassandra and show all the low-level information (per table, per host). We often repeat charts from other dashboards, so that as Cassandra users we always find the information we need. This is the opposite of the Overview Dashboard approach mentioned above, which shows only "high level" information.

Read Path Dashboard

In this dashboard we are concerned about any element that could impact a high-level client read. In fact, we want to know about everything that could affect the read path in Cassandra by just looking at this dashboard.

Write Path Dashboard

This dashboard focuses on a comprehensive view of the various metrics which affect write latency and throughput. Long garbage collection pause times will always result in dips in throughput and spikes in latency, so garbage collection is featured prominently on this dashboard.

SSTable management Dashboard

This dashboard is about getting a comprehensive view of the various metrics which impact the asynchronous steps the data goes through after a write, from the flush to the data deletion, with all the compaction processes in between. Here we want to keep an eye on disk space evolution and make sure the asynchronous management of SSTables is happening efficiently, or as expected.

Alerting: Automated Anomaly Detection

To conclude: once happy with the monitoring dashboards, it is a good idea to add some alerting rules.

It is important to detect all the anomalies as quickly as possible. To bring monitoring to the next level of efficiency, it is good to be warned automatically when something goes wrong.

We believe adding alerts on each of the "Overview Dashboard" metrics will be sufficient to detect most issues and any major outage, or at least be a good starting point. For each metric, the alerting threshold should be high enough not to trigger false alerts, while still leaving time for a mitigating action to be taken. Some alerts should use absolute values (disk space available, CPU, etc.), while others will require relative values. Manually tuning some alerts, such as those on latencies, will be required based on configuration and workload.

The biggest risk with alerting is probably being flooded by false alerts, as the natural inclination is then to start ignoring them, which leads to missing valid ones. As a global guideline, any alert should trigger an action; if it does not, the alert is relatively useless and just adds noise.

Categories: FLOSS Project Planets

Justin Mason: Links for 2017-12-04

Planet Apache - Mon, 2017-12-04 18:58
  • Bella Caledonia: A Wake-Up Call

    Swathes of the British elite appeared ignorant of much of Irish history and the country’s present reality. They seemed to have missed that Ireland’s economic dependence on exports to its neighbour came speedily to an end after both joined the European Economic Community in 1973. They seemed unacquainted with Ireland’s modern reality as a confident, wealthy, and internationally-oriented nation with overwhelming popular support for EU membership. Repeated descriptions of the border as a “surprise” obstacle to talks betrayed that Britain had apparently not listened, or had dismissed, the Irish government’s insistence in tandem with the rest of the EU since April that no Brexit deal could be agreed that would harden the border between Ireland and Northern Ireland. The British government failed to listen to Ireland throughout history, and it was failing to listen still.

    (tags: europe ireland brexit uk ukip eu northern-ireland border history)

  • AWS re:invent 2017: Advanced Design Patterns for Amazon DynamoDB (DAT403) – YouTube

    Video of one of the more interesting sessions from this year’s Re:invent

    (tags: reinvent aws dynamodb videos tutorials coding)

  • AWS re:invent 2017: Container Networking Deep Dive with Amazon ECS (CON401) // Practical Applications

    Another re:Invent highlight to watch — ECS’ new native container networking model explained

    (tags: reinvent aws containers docker ecs networking sdn ops)

  • VLC in European Parliament’s bug bounty program

    This was not something I expected:

    The European Parliament has approved budget to improve the EU’s IT infrastructure by extending the free software security audit programme (FOSSA) and by including a bug bounty approach in the programme. The Commission intends to conduct a small-scale “bug bounty” activity on open-source software with companies already operating in the market. The scope of this action is to: Run a small-scale “bug bounty” activity for open source software project or library for a period of up to two months maximum; The purpose of the procedure is to provide the European institutions with open source software projects or libraries that have been properly screened for potential vulnerabilities; The process must be fully open to all potential bug hunters, while staying in-line with the existing Terms of Service of the bug bounty platform.

    (tags: vlc bug-bounties security europe europarl eu ep bugs oss video open-source)

Categories: FLOSS Project Planets

PreviousNext: Using ES6 in your Drupal Components

Planet Drupal - Mon, 2017-12-04 18:22

With the release of Drupal 8.4.x and its use of ES6 (ECMAScript 2015) in Drupal core, we've started the task of updating our jQuery plugins/widgets to use the new syntax. This post will cover what we've learnt so far and what the benefits are of doing this.

by Rikki Bochow / 5 December 2017

If you've read my post about the Asset Library system you'll know we're big fans of the Component-Driven Design approach, and having a javascript file per component (where needed, of course) is ideal. We also like to keep our JS widgets generic so that the entire component (the entire styleguide, for that matter) can be used outside of Drupal as well. Drupal behaviours and settings are still used, but live in a different javascript file to the generic widget, and simply call its function, passing in Drupal settings as "options" as required.

Here is an example with an ES5 jQuery header component, with a breakpoint value set somewhere in Drupal:

@file header.js

(function ($) {
  $.fn.header = function (options) {
    var opts = $.extend({}, $.fn.header.defaults, options);
    return this.each(function () {
      var $header = $(this);
      // do stuff with $header
    });
  };

  // Overridable defaults
  $.fn.header.defaults = {
    breakpoint: 700,
    toggleClass: 'header__toggle',
    toggleClassActive: 'is-active'
  };
})(jQuery);

@file header.drupal.js

(function ($, Drupal, drupalSettings) {
  Drupal.behaviors.header = {
    attach: function (context) {
      $('.header', context).header({
        breakpoint: drupalSettings.my_theme.header.breakpoint
      });
    }
  };
})(jQuery, Drupal, drupalSettings);

Converting these files into a different language is relatively simple as you can do one at a time and slowly chip away at the full set. Since ES6 is used in the popular JS frameworks it’s a good starting point for slowly moving towards a “progressively decoupled” front-end.

Support for ES6

Before going too far I should mention support for this syntax isn’t quite widespread enough yet! No fear though, we just need to add a “transpiler” into our build tools. We use Babel, with the babel-preset-env, which will convert our JS for us back into ES5 so that the required older browsers can still understand it.

Our Gulp setup will transpile any .es6.js file and rename it (so we're not replacing our working file), before passing the renamed file into our minifying Gulp task.

With the Babel ENV preset we can specify which browsers we actually need to support, so that we’re doing the absolute minimum transpilation (is that a word?) and keeping the output as small as possible. There’s no need to bloat your JS trying to support browsers you don’t need to!

import gulp from 'gulp';
import babel from 'gulp-babel';
import rename from 'gulp-rename'; // Needed for the rename() call below.
import path from 'path';
import config from './config';

// Helper function for renaming files
const bundleName = (file) => {
  file.dirname = file.dirname.replace(/\/src$/, '');
  file.basename = file.basename.replace('.es6', '');
  file.extname = '.bundle.js';
  return file;
};

const transpileFiles = [
  `${config.js.src}/**/*.js`,
  `${config.js.modules}/**/*.js`,
  // Ignore already minified files.
  `!${config.js.src}/**/*.min.js`,
  `!${config.js.modules}/**/*.min.js`,
  // Ignore bundle files, so we don't transpile them twice (will make more sense later)
  `!${config.js.src}/**/src/*.js`,
  `!${config.js.modules}/**/src/*.js`,
  `!${config.js.src}/**/*.bundle.js`,
  `!${config.js.modules}/**/*.bundle.js`,
];

const transpile = () => (
  gulp.src(transpileFiles, { base: './' })
    .pipe(babel({
      presets: [['env', {
        modules: false,
        useBuiltIns: true,
        targets: { browsers: ["last 2 versions", "> 1%"] },
      }]],
    }))
    .pipe(rename(file => (bundleName(file))))
    .pipe(gulp.dest('./'))
);

transpile.description = 'Transpile javascript.';

gulp.task('scripts:transpile', transpile);

Which uses:

$ yarn add path gulp gulp-babel gulp-rename babel-preset-env --dev

On a side note, we’ll be outsourcing our entire Gulp workflow real soon. We’re just working through a few extra use cases for it, so keep an eye out!

Learning ES6

Reading about ES6 is one thing, but I find getting into the code to be the best way to learn. We like to follow Drupal coding standards, so we point our eslint config to extend what's in Drupal core. Upgrading to 8.4.x obviously threw a LOT of new lint errors, and linting was usually disabled until time permitted their correction. But you can use these errors as a tailored ES6 guide. Tailored, because it's directly applicable to how you usually write JS (assuming you wrote the original code).

Working through each error, looking up the description and correcting it manually (as opposed to using the --fix flag), was a great way to learn. It took some time, but once you understand a rule you can start skipping it, then use the --fix flag at the end for a bulk correction.

Of course you're also a Google away from a tonne of online resources and videos to help you learn if you prefer that approach!

ES6 with jQuery

Our original code is usually in jQuery, and I didn't want to add removing jQuery to the refactor work, so currently we're using both, which works fine. Removing it from the mix entirely will be a future task.

The biggest gotcha was probably our use of this, which needed to be reviewed once converted to arrow functions. Taking our header example from above:

return this.each(function () {
  var $header = $(this);
});

Once converted into an arrow function, this inside the loop is no longer scoped to the inner function. It doesn't change at all - it's not an individual element of the loop anymore, it's still the same object we're looping through. So clearly stating the obj as an argument of the .each() function lets us access the individual element again:

return this.each((i, obj) => {
  const $header = $(obj);
});

Converting the jQuery plugins (or jQuery UI widgets) to ES6 modules was a relatively easy task as well… instead of:

(function ($) {
  $.fn.header = function (options) {
    var opts = $.extend({}, $.fn.header.defaults, options);
    return this.each(function () {
      var $header = $(this);
      // do stuff with $header
    });
  };

  // Overridable defaults
  $.fn.header.defaults = {
    breakpoint: 700,
    toggleClass: 'header__toggle',
    toggleClassActive: 'is-active'
  };
})(jQuery);

We just make it a normal-ish function:

const headerDefaults = {
  breakpoint: 700,
  toggleClass: 'header__toggle',
  toggleClassActive: 'is-active'
};

function header(options) {
  // 'this' is not a valid arrow-function parameter name, so we pass it
  // through under the name 'self' instead.
  (($, self) => {
    const opts = $.extend({}, headerDefaults, options);
    return $(self).each((i, obj) => {
      const $header = $(obj);
      // do stuff with $header
    });
  })(jQuery, this);
}

export { header as myHeader };

Since the exported ES6 module has to be a top level function, the jQuery wrapper was moved inside it, along with passing through the this object. There might be a nicer way to do this but I haven't worked it out yet! Everything inside the module is the same as I had in the jQuery plugin, just updated to the new syntax.

I also like to rename my modules when I export them so they’re name-spaced based on the project, which helps when using a mix of custom and vendor scripts. But that’s entirely optional.

Now that we have our generic JS using ES6 modules it’s even easier to share and reuse them. Remember our Drupal JS separation? We no longer need to load both files into our theme. We can import our ES6 module into our .drupal.js file then attach it as a Drupal behaviour. 

@file header.drupal.js

import { myHeader } from './header';

(($, { behaviors }, { my_theme }) => {
  behaviors.header = {
    attach(context) {
      myHeader.call($('.header', context), {
        breakpoint: my_theme.header.breakpoint
      });
    }
  };
})(jQuery, Drupal, drupalSettings);

So, a few differences here: we're importing the myHeader function from our other file, we're destructuring our Drupal and drupalSettings arguments to simplify them, and we're using .call() on the function to pass in the object before setting its arguments. Now the header.drupal.js file is the only file we need to tell Drupal about.

Some other nice additions in ES6 that have less to do with jQuery are template literals (being able to say $(`.${opts.toggleClass}`) instead of $('.' + opts.toggleClass)) and the more obvious use of const and let, which are block-scoped, instead of var.

Importing modules into different files requires an extra step in our build tools, though. Because browser support for ES6 modules is also a bit too low, we need to “bundle” the modules together into one file. The most popular bundler available is Webpack, so let’s look at that first.

Bundling with Webpack

Webpack is super powerful and was my first choice when I reached this step. But it's not really designed for this component-based approach. Few bundlers truly are... They are great for taking one entry JS file which has multiple ES6 modules imported into it. Those modules might be broken down into smaller ES6 modules, and at some level are components much like ours, but ultimately they end up being bundled into ONE file.

But that’s not what I wanted! What I wanted, as it turned out, wasn’t very common. I wanted to add Webpack into my Gulp tasks much like our Sass compilation is, taking a “glob” of JS files from various folders (which I don’t really want to have to list), then to create a .bundle.js file for EACH component which included any ES6 modules I used in those components.

The each part was the real clincher. Getting multiple entry points into Webpack is one thing, but multiple destination points as well was certainly a challenge. The vinyl-named npm module was a lifesaver. This is what my Gulp task looked like:

import gulp from 'gulp';
import webpackStream from 'webpack-stream';
import webpack from 'webpack'; // Use newer webpack than webpack-stream
import named from 'vinyl-named';
import path from 'path';
import config from './config';

const bundleFiles = [
  config.js.src + '/**/src/*.js',
  config.js.modules + '/**/src/*.js',
];

const bundle = () => (
  gulp.src(bundleFiles, { base: './' })
    // Define [name] with the path, via vinyl-named.
    // (A regular function, not an arrow, so `this` is the stream.)
    .pipe(named(function (file) {
      const thisFile = bundleName(file); // Reuse our naming helper function
      // Set named value and queue.
      thisFile.named = thisFile.basename;
      this.queue(thisFile);
    }))
    // Run through webpack with the babel loader for transpiling to ES5.
    .pipe(webpackStream({
      output: {
        filename: '[name].bundle.js', // Filename includes path to keep directories
      },
      module: {
        loaders: [{
          test: /\.js$/,
          exclude: /node_modules/,
          loader: 'babel-loader',
          query: {
            presets: [['env', {
              modules: false,
              useBuiltIns: true,
              targets: { browsers: ["last 2 versions", "> 1%"] },
            }]],
          },
        }],
      },
    }, webpack))
    .pipe(gulp.dest('./')) // Output each [name].bundle.js file next to its source
);

bundle.description = 'Bundle ES6 modules.';

gulp.task('scripts:bundle', bundle);

Which required:

$ yarn add path webpack webpack-stream babel-loader babel-preset-env vinyl-named --dev

This worked. But Webpack adds some boilerplate JS to its bundle output file, which it needs for module wrapping etc. This is totally fine when the output is a single file, but adding this (exact same) overhead to each of our component JS files starts to add up. Especially when we have multiple component JS files loading on the same page, duplicating that code.

It only made each component a couple of KB bigger (once minified; an unminified Webpack bundle is much bigger), but the site seemed so much slower. And it wasn't just us: a whole bunch of our javascript tests started failing because the timeouts we'd set weren't being met. Comparing the page speed to the non-Webpack version showed a definite impact on performance.

So what are the alternatives? Browserify is probably the second most popular but didn’t have the same ES6 module import support. Rollup.js is kind of the new bundler on the block and was recommended to me as a possible solution. Looking into it, it did indeed sound like the lean bundler I needed. So I jumped ship!

Bundling with Rollup.js

The setup was very similar so it wasn't hard to switch over. It had a similar problem with single entry/destination points, but it was much easier to resolve with the 'gulp-rollup-each' npm module. My Gulp task now looks like:

import gulp from 'gulp';
import rollup from 'gulp-rollup-each';
import babel from 'rollup-plugin-babel';
import resolve from 'rollup-plugin-node-resolve';
import commonjs from 'rollup-plugin-commonjs';
import path from 'path';
import config from './config';

const bundleFiles = [
  config.js.src + '/**/src/*.js',
  config.js.modules + '/**/src/*.js',
];

const bundle = () => {
  return gulp.src(bundleFiles, { base: './' })
    .pipe(rollup({
      plugins: [
        resolve(),
        commonjs(),
        babel({
          presets: [['env', {
            modules: false,
            useBuiltIns: true,
            targets: { browsers: ["last 2 versions", "> 1%"] },
          }]],
          babelrc: false,
          plugins: ['external-helpers'],
        }),
      ],
    }, (file) => {
      const thisFile = bundleName(file); // Reuse our naming helper function
      return {
        format: 'umd',
        name: path.basename(thisFile.path),
      };
    }))
    .pipe(gulp.dest('./')); // Output each [name].bundle.js file next to its source
};

bundle.description = 'Bundle ES6 modules.';

gulp.task('scripts:bundle', bundle);

We don't need vinyl-named to rename the file anymore; we can do that in a callback of gulp-rollup-each. But we need a couple of extra plugins to correctly resolve npm module paths.

So for this we needed:

$ yarn add path gulp-rollup-each rollup-plugin-babel babel-preset-env rollup-plugin-node-resolve rollup-plugin-commonjs --dev

Rollup.js does still add a little bit of boilerplate JS, but it's a much more acceptable amount. Our JS tests all passed, so that was a great sign. Page speed tests showed the slight improvement I was expecting from having bundled a few files together. We're still keeping the original transpile Gulp task for ES6 files that don't include any imports, since they don't need to go through Rollup.js at all.

Webpack might still be the better option for more advanced things that a decoupled frontend might need, like Hot Module Replacement. But for simple or only slightly decoupled components Rollup.js is my pick.

Next steps

Some modern browsers can already support ES6 module imports, so this whole bundle step is becoming somewhat redundant. Ideally the bundled file, with its overhead and old-fashioned code, would only be used on those older browsers that can't handle the new and improved syntax, and modern browsers would use straight ES6...

Luckily this is possible with a couple of script attributes. Our .bundle.js file can be included with the nomodule attribute, alongside the source ES6 file with a type="module" attribute. Older browsers ignore the type="module" file entirely because modules aren't supported, and browsers that can support modules ignore the nomodule file because it told them to. This article explains it more.
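To illustrate, a minimal sketch of that pairing (the file names are just examples following the bundle naming from our build tasks):

<script type="module" src="src/header.js"></script>
<script nomodule src="header.bundle.js"></script>

Browsers that understand modules run only the first script; older browsers skip it and fall back to the second.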

Then we'll start replacing the jQuery entirely, and even look at introducing a Javascript framework like React or Glimmer.js for the more interactive components, to progressively decouple our front-ends!

Tagged JavaScript, ES6, Progressive Decoupling
Categories: FLOSS Project Planets

Continuum Analytics Blog: Anaconda Training: A Learning Path for Data Scientists

Planet Python - Mon, 2017-12-04 17:00
Here at Anaconda, our mission has always been to make the art of data science accessible to all. We strive to empower people to overcome technical obstacles to data analysis so they can focus on asking better questions of their data and solving actual, real-world problems. With this goal in mind, we’re excited to announce …
Read more →
Categories: FLOSS Project Planets

health @ Savannah: GNU Health patchset 3.2.9 released

GNU Planet! - Mon, 2017-12-04 16:52

Dear community

GNU Health 3.2.9 patchset has been released!

Priority: High

Table of Contents
  • About GNU Health Patchsets
  • Updating your system with the GNU Health control Center
  • Summary of this patchset
  • Installation notes
  • List of issues related to this patchset
About GNU Health Patchsets

We provide "patchsets" to stable releases. Patchsets allow applying bug fixes and updates on production systems. Always try to keep your production system up-to-date with the latest patches.

Patches and Patchsets maximize uptime for production systems, and keep your system updated, without the need to do a whole installation.

NOTE: Patchsets are applied on previously installed systems only. For new, fresh installations, download and install the whole tarball (i.e., gnuhealth-3.2.9.tar.gz).

Updating your system with the GNU Health control Center

Starting with the GNU Health 3.x series, you can do automatic updates of the GNU Health and Tryton kernel and modules using the GNU Health control center program.

Please refer to the administration manual section ( https://en.wikibooks.org/wiki/GNU_Health/Control_Center )

The GNU Health control center works on standard installations (those done following the installation manual on wikibooks). Don't use it if you use an alternative method or if your distribution does not follow the GNU Health packaging guidelines.
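For a standard installation, the update typically boils down to running the control center as the gnuhealth user. A rough sketch (treat the exact invocation as an assumption and check the manual linked above, as options may differ between versions):

$ su - gnuhealth
$ gnuhealth-control update   # fetch and apply the latest patchset

After it finishes, follow the database update step described in the installation notes below.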

Summary of this patchset

Patchset 3.2.9 mainly fixes issues with real-time computation of fields in the evaluation, lab, and APACHE II score systems.

Minor view reordering on the WHR and BMI fields has also been applied.

Refer to the List of issues related to this patchset for a comprehensive list of fixed bugs.

Installation Notes

You must apply previous patchsets before installing this patchset. If your patchset level is 3.2.8, then just follow the general instructions.
You can find the patchsets at the GNU Health main download site at GNU.org (https://ftp.gnu.org/gnu/health/).

In most cases, GNU Health Control center (gnuhealth-control) takes care of applying the patches for you.

Follow the general instructions at

After applying the patches, make a full update of your GNU Health database as explained in the documentation.

  • Restart the GNU Health Tryton server
List of issues and tasks related to this patchset
  • bug #52580: Removing the patient field before saving the record generates an error
  • bug #52579: some on_change numeric method operations generate traceback
  • bug #52578: WHR should be on the same line as hip and waist fields

For detailed information about each issue, you can visit https://savannah.gnu.org/bugs/?group=health
For detailed information about each task, you can visit https://savannah.gnu.org/task/?group=health

For detailed information you can read about Patches and Patchsets

Categories: FLOSS Project Planets

Thomas Lange: FAI.me build server improvements

Planet Debian - Mon, 2017-12-04 15:59

Only one week ago, I announced the FAI.me build service for creating your own installation images. I've got some feedback, and people would like to have root login without a password but using an ssh key. This feature is now available. You can upload your public ssh key, which will be installed as authorized_keys for the root account.

You can now also download the configuration space that is used on the installation image, and you can get the whole log file from the fai-mirror call. This command creates the partial package mirror. The log file helps you debug if you add packages which have conflicts with other packages, or if you misspell a package name.


Categories: FLOSS Project Planets

Joey Hess: new old thing

Planet Debian - Mon, 2017-12-04 15:50

This branch came from a cedar tree overhanging my driveway.

It was fun to bust this open and shape it with hammer and chisels. My dad once recommended learning to chisel before learning any power tools for woodworking... so I suppose this is a start.

Some tung oil and drilling later, and I'm very pleased to have a nice place to hang my cast iron.

Categories: FLOSS Project Planets

Weekly Python Chat: Functions in Python

Planet Python - Mon, 2017-12-04 14:30

This week we'll discuss how Python's functions work differently than functions in C, Java, JavaScript and many other programming languages. Sign up and ask a question before the chat begins!

Categories: FLOSS Project Planets

Dries Buytaert: We have 10 days to save net neutrality

Planet Drupal - Mon, 2017-12-04 13:51

Last month, the Chairman of the Federal Communications Commission, Ajit Pai, released a draft order that would soften net neutrality regulations. He wants to overturn the restrictions that make paid prioritization, blocking or throttling of traffic unlawful. If approved, this order could drastically alter the way that people experience and access the web. Without net neutrality, Internet Service Providers could determine what sites you can or cannot see.

The proposed draft order is disheartening. Millions of Americans are trying to save net neutrality; the FCC has received over 5 million emails, 750,000 phone calls, and 2 million comments. Unfortunately this public outpouring has not altered the FCC's commitment to dismantling net neutrality.

The commission will vote on the order on December 14th. We have 10 days to save net neutrality.

Although I have written about net neutrality before, I want to explain the consequences and urgency of the FCC's upcoming vote.

What does Pai's draft order say?

Chairman Pai has long been an advocate for "light touch" net neutrality regulations, and claims that repealing net neutrality will allow "the federal government to stop micromanaging the Internet".

Specifically, Pai aims to scrap the protection that classifies ISPs as common carriers under Title II of the Communications Act of 1934. Radio and phone services are also protected under Title II, which prevents companies from charging unreasonable rates or restricting access to services that are critical to society. Pai wants to treat the internet differently, and proposes that the FCC should simply require ISPs "to be transparent about their practices". The responsibility of policing ISPs would also be transferred to the Federal Trade Commission. Instead of maintaining the FCC's clear-cut and rule-based approach, the FTC would practice case-by-case regulation. This shift could be problematic as a case-by-case approach could make the FTC a weak consumer watchdog.

The consequences of softening net neutrality regulations

At the end of the day, frail net neutrality regulations mean that ISPs are free to determine how users access websites, applications and other digital content.

It is clear that depending on ISPs to be "transparent" will not protect against implementing fast and slow lanes. Rolling back net neutrality regulations means that ISPs could charge website owners to make their website faster than others. This threatens the very idea of the open web, which guarantees an unfettered and decentralized platform to share and access information. Gravitating away from the open web could create inequity in how communities share and express ideas online, which would ultimately intensify the digital divide. This could also hurt startups as they now have to raise money to pay for ISP fees or fear being relegated to the "slow lane".

The way I see it, implementing "fast lanes" could alter the technological, economic and societal impact of the internet we know today. Unfortunately it seems that the chairman is prioritizing the interests of ISPs over the needs of consumers.

What you can do today

Chairman Pai's draft order could dictate the future of the internet for years to come. In the end, net neutrality affects how people, including you and me, experience the web. I've dedicated both my spare time and my professional career to the open web because I believe the web has the power to change lives, educate people, create new economies, disrupt business models and make the world smaller in the best of ways. Keeping the web open means that these opportunities can be available to everyone.

If you're concerned about the future of net neutrality, please take action. Share your comments with the U.S. Congress and contact your representatives. Speak up about your concerns with your friends and colleagues. Organizations like The Battle for the Net help you contact your representatives — it only takes a minute!

Now is the time to stand up for net neutrality: we have 10 days and need everyone's help.

Categories: FLOSS Project Planets

Seventeen new GNU releases in the month of November

FSF Blogs - Mon, 2017-12-04 11:43

(as of November 24, 2017):

For announcements of most new GNU releases, subscribe to the info-gnu mailing list: https://lists.gnu.org/mailman/listinfo/info-gnu.

To download: nearly all GNU software is available from https://ftp.gnu.org/gnu/, or preferably one of its mirrors from https://www.gnu.org/prep/ftp.html. You can use the URL https://ftpmirror.gnu.org/ to be automatically redirected to a (hopefully) nearby and up-to-date mirror.

This month, we welcome Bertrand Garrigues as a maintainer of GNU Groff.

A number of GNU packages, as well as the GNU operating system as a whole, are looking for maintainers and other assistance: please see https://www.gnu.org/server/takeaction.html#unmaint if you'd like to help. The general page on how to help GNU is at https://www.gnu.org/help/help.html.

If you have a working or partly working program that you'd like to offer to the GNU project as a GNU package, see https://www.gnu.org/help/evaluation.html.

As always, please feel free to write to us at maintainers@gnu.org with any GNUish questions or suggestions for future installments.

Categories: FLOSS Project Planets

Acquia Lightning Blog: Migrating to Content Moderation with Lightning

Planet Drupal - Mon, 2017-12-04 11:38
Migrating to Content Moderation with Lightning Adam Balsam Mon, 12/04/2017 - 11:38

NOTE: This blog post is about a future release of Lightning. Lightning 2.2.4, with the migration path to Content Moderation, will be released Wednesday, December 6th.

The second of two major migrations this quarter is complete! Lightning 2.2.4 will migrate you off of Workbench Moderation and onto Core Workflows and Content Moderation. (See our blog post about Core Media, our first major migration.)

The migration was a three-headed beast:

  1. The actual migration which included migrating states and transitions into Workflows and migrating the states of individual entities into Content Moderation.
  2. Making sure other Lightning Workflow features continued to work with Content Moderation, including the ability to scheduled state transitions for content.
  3. Feature parity between Workbench Moderation and Content Moderation.
Tryclyde - the three-headed CM migration beast

The actual migration

Content Moderation was not a direct port of Workbench Moderation. It introduced the concept of Workflows, which abstracts states and transitions from Content Moderation. As a result, the states and transitions that users had defined in WBM might not easily map to Workflows - especially if different content types have different states available.

To work around this, the migrator generates a hash of all available states per content type; then groups content types with identical hashes into Workflows. As an example, a site with the following content types and states would result in three Workflows as indicated by color:

WBM states/transitions mapping to Workflows
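To illustrate the grouping idea, here is a simplified PHP sketch (not the actual migrator code; the content types and state names are invented):

// Each content type with its Workbench Moderation states.
$states_per_type = [
  'article' => ['draft', 'needs_review', 'published'],
  'page' => ['draft', 'published'],
  'blog' => ['draft', 'needs_review', 'published'],
];

// Group content types whose state sets hash identically into one Workflow.
$workflows = [];
foreach ($states_per_type as $type => $states) {
  sort($states);
  $workflows[md5(implode('.', $states))][] = $type;
}

// Result: 'article' and 'blog' share one Workflow; 'page' gets its own.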

The second half of the migration was making sure all existing content retained the correct state. Early prototypes used the batch API to process states, but this quickly became unscalable. In the end, we used the Migrate module to:

  1. Store the states of all entities and then remove them from the entities themselves.
  2. Uninstall Workbench Moderation and install Workflows + Content Moderation.
  3. Map the stored states back to their original entities as Content Moderation fields.

Note: This section of Lightning migration was made available as the contrib module WBM2CM. The rest of the migration is Lightning-specific.

Other Lightning Workflow features

Lightning Workflow does more than just provide states. Among other things, it also allows users to schedule state transitions. We have used the Scheduled Updates module for this since its introduction. Unfortunately, Scheduled Updates won't work with the computed field that is provided by Content Moderation. As a result, we ended up building a scheduler into Lightning Workflow itself.

Scheduled Updates is still appropriate and recommended for more complex scheduling - like for body fields or taxonomy term names. But for the basic content state transitions (i.e., publish this on datetime) you can use native Lightning Workflow.

As an added bonus, we sidestep a nasty translation bug (feature request?) that has been giving us problems with Scheduled Updates.

Feature parity

While Workflows is marked as stable in Core, Content Moderation is still in beta. This is partially because it's still missing some key features and integrations that Lightning uses. Specifically, Lightning has brought in patches and additional code so that we can have basic integration between Content Moderation ↔ Views and Content Moderation ↔ Quick Edit.

Want to try it out?

Assuming a standard Composer setup, you can update to the latest Lightning with the following. The migration is included in Lightning 2.2.4 and above:

$ composer update acquia/lightning --with-dependencies

Once you have updated your code, you can have Lightning automatically apply all pending updates, including the Content Moderation migration, with the following (recommended):

$ /path/to/console/drupal update:lightning --no-interaction

Or you can just enable the WBM2CM module manually and trigger the migration with:

$ drush wbm2cm-migrate


Categories: FLOSS Project Planets

Mediacurrent: Annotate to Communicate

Planet Drupal - Mon, 2017-12-04 10:34

Someone once said, “if you have to explain the joke, it takes the fun out of it.” Well, the same can be said for designing a website. Explaining the important and sometimes technical details can be a tedious process many designers would avoid if possible. But when it comes to communicating the form and function of the user experience through wireframes, explaining each element can make or break the project. It’s always a good idea to include annotations.

Categories: FLOSS Project Planets

Django Weekly: Issue 63: Django 2.0 is out, Multitenant App, Free Apache Spark Guide, Django Logging and more

Planet Python - Mon, 2017-12-04 10:30
Worthy Read
Continuous Delivery: GoCD VS Spinnaker
GoCD or Spinnaker? This post is an overview of GoCD and Spinnaker, why they are different from each other, and which problems you should use them to solve.

Django 2.0 release notes | Django documentation | Django
Worth noting that from 2.0 onwards the Python 2.7 series will not be supported. Simplified URL routing, a mobile-friendly contrib.admin, Window expressions and more are supported. Have a look.
new release

Building TwilioQuest with Twilio Sync, Django, and Vue.js
TwilioQuest is our developer training curriculum disguised as a retro-style video game. While you learn valuable skills for your day job, you get to earn XP and wacky loot to equip on your 8-bit avatar. Today we'll pull back the curtain and show the code that the Developer Education team wrote to create TwilioQuest.

Scaling out your Django Multi-tenant App
Our django-multitenant Python library enables easy scale-out of applications that are built on top of Django and follow a multi-tenant data model. This Python library has evolved from our experience working with SaaS customers, scaling out their multi-tenant apps.

Testing Models with Django using Pytest and Factory Boy
Pytest and Factory Boy make a rad combo for testing Django applications. We can test models by arranging our models as factories and running testing logic on them.
testing, factory

Free Apache Spark™ Guide
The Definitive Guide to Apache Spark. Download today!

Introduction to Django Channels
Django Channels' goal is to extend the Django framework, adding to it a new layer to handle the use of WebSockets and background tasks.

Hellosign
Embed docs directly on your website with a few lines of code. Test the API for free.

How to debug in Django?
Many people think that developers spend most of their professional career troubleshooting bugs in applications that we implement or reverse engineer, so one of our biggest allies should be debuggers. From BeDjango we will discuss a series of methods and utilities, some very simple and others more complex, to help you in your Django projects, although some of the following tools can be used easily in any Python project.

Django Logging, The Right Way
Good logging is critical to debugging and troubleshooting problems. Not only is it helpful in local development, but in production it's indispensable. When reviewing logs for an issue, it's rare to hear somebody say, "We have too much logging in our app." but common to hear the converse. So, with that in mind, let's get started.

File Mime types in Django
Python has a mimetypes package that can be used to check the mime type of a file.

Demystifying Django's Import Strings: How Context Processors And Apps Settings Work
This post aims to dispel some ambiguities introduced by Django for those learning it. These things initially perplexed me. I found it hard to grasp and articulate, and for my first years in Python, they were akin to magic.

Configure Django to log exceptions in production
logging, exception

django-subadmin - 59 Stars, 2 Forks
A special kind of ModelAdmin that allows it to be nested within another ModelAdmin.

react-django-pwa-kit - 0 Stars, 0 Forks
Boilerplate for React + Django + PWA.

silverstrike - 0 Stars, 0 Forks
Webapp based on Django to manage personal finances.

pygmy - 0 Stars, 0 Forks
Open-source, extensible & easy-to-use URL shortener. It's very easy to host and run. It's created keeping in mind that it should be easy to have your custom URL shortener up and running without much effort.

django_material_widgets - 0 Stars, 0 Forks
Easily convert your Django Forms and ModelForms to use widgets styled with Material Components for the Web.

codejobs - 0 Stars, 0 Forks
An open job listing API to build job listing client apps for web, mobile, desktop, etc. for tech-related jobs.

django-vuejs-boilerplate - 0 Stars, 0 Forks
A boilerplate for SPA applications with Django as Backend and VueJS as Frontend with hot loading. It uses django-webpack-loader to link bundles.
Categories: FLOSS Project Planets

LakeDrops Drupal Consulting, Development and Hosting: Welcome Matthias

Planet Drupal - Mon, 2017-12-04 09:50
Welcome Matthias Jürgen Haas Mon, 12/04/2017 - 15:50

We are so glad to announce that Matthias Walti has decided to join LakeDrops. He brings skills and experience in building e-commerce solutions, is a user experience expert, and is well known for writing great content driven by his marketing background.

Categories: FLOSS Project Planets

Holger Levsen: 20171204-qubes-mirage-firewall

Planet Debian - Mon, 2017-12-04 09:37
On using QubesOS MirageOS firewall

So I'm lucky enough to attend the 4th MirageOS hack retreat in Marrakesh this week, where I learned to build and use qubes-mirage-firewall, a MirageOS-based (system) firewall for Qubes OS. The main visible effect is that this unikernel only needs 32 megabytes of memory, while a Debian (or Fedora) based firewall system needs half a gigabyte. It's also said to be more secure, but I have not verified that myself.

In the spirit of avoiding overhead I decided not to build with Docker as qubes-mirage-firewall's README.md suggests, but rather to use a base Debian stretch system. Here's how to build natively:

sudo apt install git opam aspcud curl debianutils m4 ncurses-dev perl pkg-config time
git clone https://github.com/talex5/qubes-mirage-firewall
cd qubes-mirage-firewall/
opam init
# the next line is super useful if there is bad internet connectivity but you happen to have access to a local mirror
# opam repo add local
opam switch 4.04.2
eval `opam config env`
## in there:
opam install -y vchan xen-gnt mirage-xen-ocaml mirage-xen-minios io-page mirage-xen mirage mirage-nat mirage-qubes netchannel
mirage configure -t xen
make depend
make tar

Then follow the instructions in the README.md and switch some AppVMs to it, and then make it the default and shut down the old firewall, if you are happy with the results - which currently I'm not sure I am, because it doesn't allow updating template VMs...

Update: qubes-mirage-firewall allows this. Just the crashed qubes-updates-proxy service in sys-net prevented it, but that's another bug elsewhere.

I also learned that it builds reproducibly given the same build path and ignoring the issue of timestamps in the generated tarball; IOW, the unikernel (and the 3 other files) inside the tarball is reproducible. I still need to compare a docker build with a build done the above way, and I really don't like having to edit the firewall's rules.ml file and then rebuild it. More on this in another post later, hopefully.

Oh, I didn't mention it and won't say more here, but this hack retreat and its organisation is marvellous! Many thanks to everyone here!

Categories: FLOSS Project Planets

Caktus Consulting Group: Caktus is Excited about Django 2.0

Planet Python - Mon, 2017-12-04 09:30

Did you know Django 2.0 is out? The development team at Caktus knows and we’re excited! You should be excited too if you work with or depend on Django. Here’s what our Cakti have been saying about the recently-released 2.0 beta.

What are Cakti Excited About?

Django first supported Python 3 with the release of version 1.5 back in February 2013. Adoption of Python 3 has only grown since then and we're ready for the milestone that 2.0 marks: dropping support for Python 2. Legacy projects that aren't ready to make the jump can still enjoy the long-term support of Django 1.11 on Python 2, of course.

With the removal of Python 2 support, a lot of Django’s internals have been simplified and cleaned up, no longer needing to support both major variants of Python. We’ve put a lot of work into moving our own projects forward to Python 3 and it’s great to see the wider Django community moving forward, too.

In more concrete changes, some Caktus devs are enthused by the transitions Django is making away from positional arguments, which can be error-prone. Among the changes are the removal of optional positional arguments from form fields, the removal of positional arguments from indexes entirely, and the addition of keyword-only arguments to custom template tags.

Of course, the new responsive and mobile-friendly admin is a much-anticipated feature! Django’s admin interface has always been a great out-of-the-box way to give staff and client users quick access to the data behind the sites we build with it. It can be a quick way to provide simple behind-the-scenes interfaces to control a wide variety of site content. Now it extends that accessibility to use on the go.

What are Cakti Cautious About?

While we're excited about a Python 3-only Django, the first thing on our list of cautions about the new release is also the dropping of Python 2 support. We've been upgrading a backlog of our own Django apps to support Python 3 in preparation, but our projects depend on a wide range of third-party apps, among which we know we'll find holdouts. That's going to mean finding alternatives, pushing pull requests, and even forking some things to bring them forward for any project we want to move to Django 2.0.

Is There Anything Cakti Actually Dislike?

While there’s a lot to be excited about, every big change has its costs and its risks. There are certainly upsets in the Django landscape we wish had gone differently, even if we would never consider them reasons to avoid the new release.

Requiring ForeignKey’s on_delete parameter

Some of us dislike the new requirement that the on_delete option to ForeignKey fields be explicit. By default, Django has always used the CASCADE rule to handle what happens when an object is deleted while other objects still hold references to it, causing the whole chain of objects to be deleted together to avoid broken state. There have also been other on_delete options for other behaviors, like prohibiting such deletions or setting the references to None when the target is deleted. As of Django 2.0, on_delete no longer defaults to CASCADE and you must pick an option explicitly.

While there are some benefits to the change, one of the most unfortunate results is that updating to Django 2.0 means updating all of your models with an explicit on_delete choice... including the entire history of your migrations, even the ones that have already been run, which will no longer be compatible without the update.
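To illustrate, a minimal sketch of the new requirement (the model names are invented):

from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Book(models.Model):
    # Before 2.0 this could be written as models.ForeignKey(Author),
    # implicitly cascading deletes; now the behavior must be spelled out.
    author = models.ForeignKey(Author, on_delete=models.CASCADE)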

Adding a Second URL Format

A new URL format is now available. It offers a much more readable and understandable format than the old regular-expression based URL patterns Django has used for years. This is largely a welcome change that will make Django more accessible to newcomers and projects easier to maintain.
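For example, the two styles side by side (a brief sketch using django.urls; the view name is invented):

from django.urls import path, re_path
from . import views

urlpatterns = [
    # New-style pattern with a typed path converter:
    path('articles/<int:year>/', views.year_archive),
    # Equivalent old-style regular-expression pattern, still supported:
    re_path(r'^articles/(?P<year>[0-9]{4})/$', views.year_archive),
]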

However, the new format is introduced in addition to the old-style regular-expression version of patterns. You can use the new style in new or existing projects, and you can make the choice to replace all your existing patterns with the cleaner style, but you’ll have to continue to contend with third-party apps that won’t make the change. If you have a sufficiently large enough project, there’s a good chance you’ll forgo migrating all your URL patterns.

Maybe this will improve with time, but for now, we’ll have to deal with the cognitive cost of both formats in our projects.

In Conclusion

Caktus is definitely ready to continue moving our client and internal projects forward with major Django releases. We have been diligently migrating projects between LTS releases. Django 2.0 will be an important stepping stone to the next LTS after 1.11, but we won’t wait until then to start learning and experimenting with these changes for projects both big and small.

Django has come a long way and Caktus is proud to continue to be a part of that.

Categories: FLOSS Project Planets

Doug Hellmann: sched — Timed Event Scheduler — PyMOTW 3

Planet Python - Mon, 2017-12-04 09:00
The sched module implements a generic event scheduler for running tasks at specific times. The scheduler class uses a time function to learn the current time, and a delay function to wait for a specific period of time. The actual units of time are not important, which makes the interface flexible enough to be used …
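A minimal usage sketch based on that description:

import sched
import time

# The scheduler takes a time function and a delay function.
scheduler = sched.scheduler(time.time, time.sleep)

def announce(message):
    print(time.time(), message)

# enter(delay, priority, action, argument)
scheduler.enter(2, 1, announce, ('two seconds elapsed',))
scheduler.enter(1, 1, announce, ('one second elapsed',))
scheduler.run()  # Blocks until all scheduled events have run.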
Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Charles R Harris

Planet Python - Mon, 2017-12-04 08:30

This week we welcome Charles R. Harris as our PyDev of the Week. Charles is a core developer of NumPy, one of Python’s most popular scientific computing libraries. He has been working on NumPy since it was still called Numeric. He is also a core developer of SciPy. Let’s take some time to get to know Charles better!

Can you tell us a little about yourself (hobbies, education, etc):

I have an undergraduate degree in physics and a doctorate in mathematics. Like many with similar backgrounds, I ended up doing many things not directly related: optical and electrical design, spectroscopy, data analysis, and programming. My final niche before retirement was as the mathematical go to guy for the engineers that I worked with. Now that I am retired my hobbies are few, mostly reading science fiction and fantasy with a sprinkling of math and physics texts.

Why did you start using Python?

When I was finishing up my doctorate, one of the other students wanted to publish a group graph on the net. He experimented with Java, which was not as portable as advertised, and ended up in Python, which he recommended as a coming thing. That was about 1999. After graduation and a return to work I tried various languages out of curiosity and found Python very pleasant to work with. Not only that, but for my job the alternatives to Python were Matlab and IDL, and neither supported things that I wanted to use: genetic algorithms, graphs, specialty digital filter designs, Hermite normal forms of integer matrices, and various other things not normally considered part of numerical analysis. Mathematica would have been an option, but it was very expensive at the time, and rather clunky. So I ended up with Numeric and Python because that was a pleasant environment in which to write my own algorithms, cost nothing, and I could make contributions to fix or add things that I wanted. The ability to make contributions was a big selling point, as over the years I had found myself rewriting the same darn functions for every new language and project, and having some of them in a more permanent, public location and written in a popular language meant that I didn’t need to do that anymore.

What other programming languages do you know and which is your favorite?

It is pretty much C and Python at this point. Over the years I’ve worked in several assembly languages, C, C++, and Fortran 77. For everyday programming it is Python all the way.

What projects are you working on now?

I am the most senior NumPy maintainer, and that is pretty much it. I see my job as fetching metaphorical pizza and coffee for the bright young’uns who do the real work.

Which Python libraries are your favorite (core or 3rd party)?

NumPy, SciPy, and Matplotlib are what I’ve used most over the years. SymPy, mpmath, and scikit-image have also been useful on occasion. And I have the feeling that someday I should really take a look at scikit-learn.

How did you get involved with NumPy?

When I got involved it was still Numeric and SciPy. My initial contributions were the 1D zero solvers in SciPy along with a fix to the random number package of Numeric, followed by the type specific sorting routines in Numarray that were later taken over by NumPy. Those all filled a personal need, which is one of the nice things about contributing to open source. My initial involvement in NumPy itself was motivated by the simple desire to break up the big glob of code that constituted its initial form and make the coding style something readable. I thought making the code more accessible would serve as developer bait. Here, kitty, kitty, kitty.

What lessons have you learned from working on this open source project?

That nothing beats longevity. I am not the most talented developer who has worked on the project, but I’ve been there a long time and that has its own special quality. Over the years NumPy has benefited enormously from being consistently maintained and updated, which in turn helps recruit new talent. It is a virtuous circle.

Is there anything else you’d like to say?


Categories: FLOSS Project Planets

Jim Jagielski: The Path to Apache OpenOffice 4.2.0

Planet Apache - Mon, 2017-12-04 07:57

It is no secret that, for awhile at least, Apache OpenOffice had lost its groove.

Partly it was due to external issues, mostly that the project and the committers were spending a lot of their time and energy battling and correcting the FUD associated with the project. Nary a week would go by without the common refrain "OpenOffice is Dead. Kill it already!" and constant (clueless) rehashes of the history between OpenOffice and LibreOffice. With all that, it is easy and understandable to see why morale within the AOO community would have been low, which would then reflect on and affect development of the project itself.

So, more than anything, what the project needed was a good ol' shot of adrenaline in the arm and some encouragement to keep the flame alive. Over the last few months this has succeeded beyond our dreams. After an admittedly way-too-long period, we finally released AOO 4.1.4. And we are not only actively working on a 4.1.5 release, but also preparing plans for our 4.2.0 release.

And it's there that you can help.

Part of what AOO really wants to be is a simple, easy-to-use, streamlined office suite for the largest population of people possible. This includes supporting old and long-deprecated OSs. For example, our goal is to continue to support Apple OSX 10.7 (Lion) with our 4.2.0 release. However, there is one platform which we are simply unsure about what to do with, and how to handle it. And what makes it even more interesting is that it's our reference build system for AOO 4.1.x: CentOS5.

Starting with AOO 4.2.0, we are defaulting to GIO instead of Gnome VFS. The problem is that CentOS5 doesn't support GIO, which means that if we continue with CentOS5 as our reference build platform for our community builds, then all Linux users who use and depend on those community builds will be "stuck" with Gnome VFS instead of GIO. If instead we start using CentOS6 as our community build server, we leave CentOS5 users in a lurch (NOTE: CentOS5 users would still be able to build AOO 4.2.0 on their own; it's just that the binaries that the AOO project supplies won't work). So we are looking at 3 options:

  1. We stick w/ CentOS5 as our ref build system for 4.2.0 but force Gnome VFS.
  2. We move to CentOS6, accept the default of GIO but understand that this moves CentOS5 as a non-supported OS for our community builds.
  3. Just as we offer Linux 32 and 64bit builds, starting w/ 4.2.0 we offer CentOS5 community builds (w/ Gnome VFS) IN ADDITION TO CentOS6 builds (w/ GIO). (i.e.: 32bit-Gnome VFS, 64bit-Gnome VFS, 32bit-GIO, 64bit-GIO).

Which one makes the most sense? Join the conversation and the discussion on the AOO dev mailing list!

Categories: FLOSS Project Planets

Agaric Collective: Change the text field maximum length in Drupal 8

Planet Drupal - Mon, 2017-12-04 07:39

Once a text field has data stored, it is not very easy or obvious how to change its maximum length. In the UI there is a message warning you that the field cannot be changed, because there is existing data. Sometimes it is necessary to change these values anyway. There seem to be a few ways and some resources for doing this in Drupal 7, but I could not find a way to do it in Drupal 8, so I decided to create a small function to do it:

Caution: Any change in the database needs to be done carefully. Before you continue please create a backup of your database.

/**
 * Update the length of a text field which already contains data.
 *
 * @param string $entity_type_id
 * @param string $field_name
 * @param integer $new_length
 */
function _module_change_text_field_max_length($entity_type_id, $field_name, $new_length) {
  $name = 'field.storage.' . $entity_type_id . "." . $field_name;

  // Get the current settings.
  $result = \Drupal::database()->query(
    'SELECT data FROM {config} WHERE name = :name',
    [':name' => $name]
  )->fetchField();
  $data = unserialize($result);
  $data['settings']['max_length'] = $new_length;

  // Write settings back to the database.
  \Drupal::database()->update('config')
    ->fields(array(
      'data' => serialize($data)))
    ->condition('name', $name)
    ->execute();

  // Update the value column in both the _data and _revision tables for the field.
  $table = $entity_type_id . "__" . $field_name;
  $table_revision = $entity_type_id . "_revision__" . $field_name;
  $new_field = ['type' => 'varchar', 'length' => $new_length];
  $col_name = $field_name . '_value';
  \Drupal::database()->schema()->changeField($table, $col_name, $col_name, $new_field);
  \Drupal::database()->schema()->changeField($table_revision, $col_name, $col_name, $new_field);

  // Flush the caches.
  drupal_flush_all_caches();
}

This function needs the entity type, the name of the field, and the new length.

And we can use it like this:

_module_change_text_field_max_length('node', 'field_text', 280);

Usually, this code should be placed in (or called from) a hook_update_N() implementation so it will be executed automatically during the update.
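For instance, a sketch of such an update hook (the module name and update number are placeholders):

/**
 * Increase the maximum length of field_text on nodes to 280 characters.
 */
function mymodule_update_8001() {
  _module_change_text_field_max_length('node', 'field_text', 280);
}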

And if the new length is too long to be placed in a regular input area, you can use the Textarea widget for text fields, which will allow you to use the larger text area form element for text fields.

Categories: FLOSS Project Planets