FLOSS Project Planets

Jeff Geerling's Blog: Use a Drupal 8 BLT project with Drupal VM on Windows 7 or Windows 8

Planet Drupal - Wed, 2017-03-22 19:09

Windows 10 is the only release Acquia's BLT officially supports. But there are still many people who use Windows 7 and 8, and most of these people don't have control over what version of Windows they use.

Drupal VM has supported Windows 7, 8, and 10 since I started building it a few years ago (at that time I was still running Windows 7). With a little finesse, you can actually get an entire modern BLT-based Drupal 8 project running on Windows 7 or 8, as long as you do all the right things, as this blog post will demonstrate.

Categories: FLOSS Project Planets

C++ Concepts TS for getting functions as arguments, and the book discount

Planet KDE - Wed, 2017-03-22 19:06

One of my pet peeves with teaching FP in C++ is that if we want to have efficient code, we need to accept functions and other callable objects as template arguments.

Because of this, we do not have function signatures that are self-documenting. Consider a function that outputs items that satisfy a predicate to the standard output:

template <typename Predicate>
void write_if(const std::vector<int> &xs, Predicate p)
{
    std::copy_if(begin(xs), end(xs),
                 std::ostream_iterator<int>(std::cout, " "),
                 p);
}

We see that the template parameter is named Predicate, so we can imply that it needs to return a bool (or something convertible to bool), and we can deduce from the function name and the type of the first argument that it should be a function that takes an int.

This is a lot of reasoning just to be able to tell what we can pass to the function.

For this reason, Bartosz uses std::function in his blog posts – it tells us exactly which functions we can pass in. But std::function is slow.

So, we either need to have a bad API or a slow API.

With concepts, this will change.

We will be able to define a really short (and a bit dirty) concept that will check whether the functions we get are of the right signature:

Edit: Changed the concept name to Callable to fit the naming in the standard [func.def] since it supports any callable, not just function objects

template <typename F, typename CallableSignature>
concept bool Callable =
    std::is_convertible<F, std::function<CallableSignature>>::value;

void foo(Callable<int(int)> f) // or Callable<auto (int) -> int>
{
    std::cout << std::invoke(f, 42) << std::endl;
}

We will be able to call foo with any callable that looks like an int-to-int function. And we will get an error ‘constraint Callable<int(int)> is not satisfied’ for those that do not have a matching signature.

An alternative approach is to use the std::is_invocable type trait (thanks to Agustín Bergé for writing the original proposal and pointing me to it). It provides us with a cleaner definition for the concept, though the usage syntax will have to be a bit different if we want to keep the concept definition short and succinct.

template <typename F, typename R, typename ...Args>
concept bool Callable = std::is_invocable_r<R, F, Args...>::value;

void foo(Callable<int, int> f)
{
    std::cout << std::invoke(f, 42) << std::endl;
}

When we get concepts (C++20, hopefully), we will have the best of both worlds: an optimal way to accept callable objects as function arguments, without sacrificing the API to do it.

Book discount

Today, Functional Programming in C++ is again the Deal of the Day – you get half off if you use the code dotd032317au at cukic.co/to/manning-dotd

Categories: FLOSS Project Planets

Tarek Ziade: Load Testing at Mozilla

Planet Python - Wed, 2017-03-22 19:00

After a stabilization phase, I am happy to announce that Molotov 1.0 has been released!

(Logo by Juan Pablo Bravo)

This release is an excellent opportunity to explain a little bit how we do load testing at Mozilla, and what we're planning to do in 2017 to improve the process.

I am talking here specifically about load testing our HTTP services, and when this blog post mentions what Mozilla is doing there, it refers mainly to the Mozilla QA team, helped by the Services developers team that works on some of our web services.

What's Molotov?

Molotov is a simple load testing tool

Molotov is a minimalist load testing tool you can use to load test an HTTP API using Python. Molotov leverages Python 3.5+ asyncio and uses aiohttp to send some HTTP requests.

Writing load tests with Molotov is done by decorating asynchronous Python functions with the @scenario decorator:

from molotov import scenario

@scenario(100)
async def my_test(session):
    async with session.get('http://localhost:8080') as resp:
        assert resp.status == 200

When this script is executed with the molotov command, the my_test function is going to be repeatedly called to perform the load test.
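To make that execution model concrete, here is a minimal, self-contained sketch of what a runner like Molotov does: call an async scenario repeatedly under a concurrency limit and collect the results. This uses only the standard library and is not Molotov's actual implementation; the function names are illustrative.

```python
import asyncio

async def my_test():
    # Stand-in for a real HTTP call; a real Molotov scenario
    # would use the aiohttp session it receives.
    await asyncio.sleep(0)
    return 200

async def run_load(scenario, total_calls, concurrency):
    # Run `scenario` `total_calls` times, at most `concurrency` at once.
    sem = asyncio.Semaphore(concurrency)
    results = []

    async def worker():
        async with sem:
            results.append(await scenario())

    await asyncio.gather(*(worker() for _ in range(total_calls)))
    return results

results = asyncio.run(run_load(my_test, total_calls=50, concurrency=10))
print(len(results), "calls completed")
```

The real tool adds workers, process management, and duration-based runs on top of this basic loop.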

Molotov tries to be as transparent as possible and just hands over session objects from the aiohttp.client module.

The full documentation is here: http://molotov.readthedocs.io

Using Molotov is the first step to load test our services. From our laptops, we can run that script and hammer a service to make sure it can hold some minimal charge.

What Molotov is not

Molotov is not a fully-featured load testing solution

Load testing applications usually come with high-level features to help you understand how the tested app is performing. Performance metrics are displayed when you run a test; for example, Apache Bench displays how many requests it was able to perform and their average response time.

But when you are testing web services stacks, the metrics you collect from each client attacking your service will include a lot of variation because of network and client CPU overhead. In other words, you cannot guarantee reproducibility from one test to the next to track precisely how your app evolves over time.

Adding metrics directly in the tested application itself is much more reliable, and that's what we're doing these days at Mozilla.

That's also why I have not included any client-side metrics in Molotov, besides a very simple StatsD integration. When we run Molotov at Mozilla, we mostly watch our centralized metrics dashboards and see how the tested app behaves regarding CPU, RAM, Requests-Per-Second, etc.

Of course, running a load test from a laptop is less than ideal. We want to avoid the hassle of asking people to install Molotov and all the dependencies a test requires every time they want to load test a deployment -- and run something from their desktop. Doing load tests occasionally from your laptop is fine, but it's not a sustainable process.

And even though a single laptop can generate a lot of load (in one project, we're generating around 30k requests per second from one laptop, and happily killing the service), we also want to do some distributed load testing.

We want to run Molotov from the cloud. And that's what we do, thanks to Docker and Loads.

Molotov & Docker

Since running the Molotov command mostly consists of using the right command-line options and passing a test script, we've added in Molotov a second command-line utility called moloslave.

Moloslave takes the URL of a git repository, clones it, and runs the molotov test inside it by reading a configuration file. The configuration file is a simple JSON file that needs to be at the root of the repo, much as you would do with Travis-CI or other tools.
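For illustration, such a configuration file might look something like this. The key names here are hypothetical, chosen only to show the shape of the file; check the moloslave documentation linked below for the actual schema.

```json
{
  "molotov": {
    "test": "loadtest.py",
    "duration": 60,
    "requirements": "requirements.txt"
  }
}
```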

See http://molotov.readthedocs.io/en/latest/slave

From there, running in Docker can be done with a generic image that has Molotov preinstalled and picks up the test by cloning a repo.

See http://molotov.readthedocs.io/en/latest/docker

Having Molotov run in Docker solves all the dependency issues you can have when you are running a Python app. We can specify all the requirements in the configuration file and have moloslave install them. The generic Docker image I have pushed to Docker Hub is a standard Python 3 environment that works in most cases, but it's easy to create another Docker image when a very specific environment is required.

But the bottom line is that anyone from any OS can "docker run" a load test by simply passing the load test Git URL into an environment variable.

Molotov & Loads

Once you can run load tests using Docker images, you can use specialized Linux distributions like CoreOS to run them.

Thanks to boto, you can script the Amazon Cloud and deploy hundreds of CoreOS boxes and run Docker images in them.

That's what the Loads project is -- an orchestrator that will run hundreds of CoreOS EC2 instances to perform a massively distributed load test.

Someone who wants to run such a test passes to a Loads Broker running in the Amazon Cloud a configuration that says where to find the Docker image with the Molotov test and how long the test needs to run.

That allows us to run hours-long tests without having to depend on a laptop to orchestrate it.

But the Loads orchestrator has been suffering from reliability issues. Sometimes, EC2 instances on AWS become unresponsive, and Loads no longer knows what's happening in a load test. We've suffered from that and had to write specific code to clean up boxes and avoid keeping hundreds of zombie instances around.

But even with these issues, we're able to perform massive load tests distributed across hundreds of boxes.

Next Steps

At Mozilla, we are in the process of gradually switching all our load testing scripts to Molotov. Using a single tool everywhere will allow us to simplify the whole process that takes that script and performs a distributed load test.

I am also investigating improved metrics. One idea is to automatically collect all the metrics generated during a load test and push them to a specialized performance trend dashboard.

We're also looking at switching from Loads to Ardere. Ardere is a new project that aims to leverage Amazon ECS, an orchestrator we can use to create and manage EC2 instances. We've tried ECS in the past, but it was not suited to rapidly spinning up hundreds of boxes for a load test. ECS has improved a lot since then, and we have started a prototype that leverages it, and it looks promising.

For everything related to our Load testing effort at Mozilla, you can look at https://github.com/loads/

And of course, everything is open source and open to contributions.

Categories: FLOSS Project Planets

Nick Kew: The right weapon

Planet Apache - Wed, 2017-03-22 17:04

Today’s terrorist attack in London seems to have been in the worst tradition of slaughtering the innocent, but pretty feeble in its token attempt on the more noble target of Parliament.  This won’t become a Grand Tradition like Catesby’s papists’ attack.

But if we accept that the goal was slaughter of the innocent, then today’s perpetrator made a better job of it than most have done, at least since the days of the IRA, with their deep-pocketed US backers and organised paramilitary structure.  His weapon of choice was the obvious one for the purpose, having far more destructive power than many that are subject to heavy security theatre and sometimes utterly ridiculous restrictions.  Even some of those labelled “weapons of mass destruction”.

The car.  The weapon that is available freely to everyone, no questions asked.  The weapon no government dare restrict.  The weapon that kills more than all others, yet where it’s so rare as to be newsworthy for any perpetrator to be meaningfully punished.  Would the 5/11 plotters have gone to such lengths with explosives if they’d had such effective weapons to hand?

With this weapon, the only limit on terrorist attacks is the number of terrorists.  No need for preparation and planning – the kind of thing that might attract the attention of police or spooks – just go ahead.

And next time we get a display of security theatre – like banning laptops on flights – we can point to the massive double-standards.

Categories: FLOSS Project Planets

Looking for a job ?

Planet KDE - Wed, 2017-03-22 16:40
KDE Project:

Are you looking for a C++/Qt/Linux developer job in Germany ?
Then maybe this is something for you: Sharp Reflections

I'm looking forward to hearing from you. :-)

Categories: FLOSS Project Planets

Sooper Drupal Themes: Are you ready for Drupal 8?

Planet Drupal - Wed, 2017-03-22 16:30

Between the rush of product updates we're putting out lately, a moment of reflection...

Like many other Drupal shops and theme/product developers I've been taking it easy with major investment in D8. But times are changing. Now we are seeing a time where Google searches including Drupal 8 are more numerous than searches containing Drupal 7. This is by no means a guarantee that D8 is a clear winner but to me it is a sign of progress and it inspires enough confidence to push ahead with our Drupal 8 product upgrades. SooperThemes is on schedule to release our Drupal themes and modules on Drupal 8 soon and I'm sure it will be great for us and our customers.

2017 will be an interesting year for Drupal, a year in which Drupal 8 will really show whether it can be as popular as its older brother. The lines in the chart might be crossing, but Drupal 8 has some way to go before it is as popular as 7. Understanding that Drupal 8 is more geared towards developers, one might say it never will be, but I think it's important for the open web that Drupal stay competitive in the low-end market. Start-ups like Tesla and SpaceX have demonstrated how Drupal can grow along with your business all the way to IPO and beyond.

Is your business ready for Drupal 8?

Personally I think I will need a month or two before I can say I'm totally comfortable with shifting development focus to Drupal 8. Most of my existing customers are on Drupal 7, and my Drupal 7 expertise and products will not become irrelevant any time soon. One thing that is holding me back is uncertainty about media library features in Drupal 8; I hope the D8media team will be successful with their awesome work to put this critical feature set in core.

If you are a Drupal developer, themer, or business owner, how do you feel about Drupal 8? Are you getting more business for Drupal 8 than Drupal 7? How is your experience with training yourself or your staff to work with Drupal 8 and its more object-oriented code?

Let me know in the comments if you have anything to share about what Drupal 8 means to you!

Categories: FLOSS Project Planets

Meet the LibrePlanet 2017 Speakers: Denver Gingerich

FSF Blogs - Wed, 2017-03-22 15:55

Would you tell us a bit about yourself?

I was born and raised in British Columbia, Canada, and although I currently live in the New York City area, I am undeniably a West Coast boy at heart. I was always an extremely quiet and shy kid, but had no problem making friends with computers. So naturally, my high school socializing involved a lot of LAN parties, which is where I discovered that installing Apache on GNU/Linux was MUCH easier than on Windows. That was where my interest in free software really began, and it has been a big part of my life ever since. When I'm not sitting at a computer, I love traveling, and generally being outdoors as much as possible—hiking and skiing are favourite pastimes, as well as exploring new places I have never been before. I am also a transit enthusiast; I love learning about the history of subway systems, transit networks and infrastructure, and trains of all kinds. I generally find it fascinating to learn about how things work, and how things came to be the way they are, and because of that, I often fall down Wikipedia rabbit holes. I will also eat just about anything, and never turn down a free conference T-shirt, no matter how hideous the colour.

How did you first become interested in having your cell phone be fully free?

I first got a cell phone number in mid-2009, but I didn't have a cell phone—the number was hosted by Google Voice. I was mostly able to use the number with free software (using email for SMS and SIP for calls) so I didn't think a lot about the freedom implications of cell phones then.

I purchased a Nokia N900 and used it when I wasn't near a computer. It still ran a lot of non-free software. Later I learned that the most significant piece of this non-free software was the baseband firmware.

A few years ago I started my transition away from all Google services. I wanted my computer to remain my primary device for SMS and calls, so I needed a Google Voice replacement. I tried to find an equivalent service, but could not find one. So I decided to write my own.

That led to the first version of Soprani.ca, which I use to this day. I've recently created a newer version of the software, called JMP, which is easier to use for the average person. Both allow a person to use phone features like SMS and calling without a cell phone (and thus without baseband firmware). And both are free software, licensed under the GNU Affero General Public License, version 3 or later.

I'm still interested in this topic because people still use phone numbers and cell phones, even though they have certain "reprehensible" features, as RMS puts it. I hope by showing people ways to communicate with cell phone users that do not require a baseband firmware that we can take back control of our communication from the cellular companies and proprietary firmware makers.

Is this your first LibrePlanet?

No, this will actually be my fifth LibrePlanet in a row! I'm looking forward to chatting with all the wonderful people that I know I'll find there, and hearing some great ideas for how we can advance the free software movement.

In particular, it is becoming increasingly difficult to buy a computer that will function with only free software. I've met people at past LibrePlanet conferences who are building their own hardware so they can continue to run exclusively free software (such as the EOMA68 CPU card). These efforts are critically important, since existing computer manufacturers will no longer create the hardware we need. I hope to learn more about these efforts and ways I can contribute to them so that we'll still be able to run free software even after the last ThinkPad without a Management Engine stops working.

How can we follow you on social media?

I'm @ossguy on many social media sites, including Pump.io and Twitter.

What is a skill or talent you have that you wish more people knew about?

My wife says that if stubbornness and perfectionism could be counted as Olympic sports, I would win all the gold medals... She is smarter and much better looking than me, so she is probably right.

Want to hear Denver and the other amazing speakers? Join us March 25-26 for LibrePlanet 2017!

Edited for content and grammar.

Categories: FLOSS Project Planets

Valuebound: How to send custom formatted HTML mail in Drupal 8 using hook_mail_alter()

Planet Drupal - Wed, 2017-03-22 15:47

As you can understand from the name itself, it's basically used to alter an email created with drupal_mail() in Drupal 7 or MailManagerInterface->mail() in Drupal 8. hook_mail_alter() allows modification of email messages, including adding and/or changing message text, message fields, and message headers.

Email sent by means other than drupal_mail() will not trigger hook_mail_alter(). All core modules use drupal_mail(), and using drupal_mail() is always recommended, though not mandatory.

Syntax: hook_mail_alter(&$message)


$message: Array containing the message data. Keys in this array include:

  • 'id': The id of the message.
  • 'to': The…
Categories: FLOSS Project Planets

Hook 42: Stanford Drupal Camp 2017 - Ready, Git Set, Go!

Planet Drupal - Wed, 2017-03-22 15:03

I fully embraced the motto “go big or go home” when I started to think about my first solo community presentation for Stanford Drupal Camp 2017. I wanted to force myself to learn a subject well enough that I could explain it to others. I like a challenge, so I set my eyes on understanding the fundamentals of Git. My presentation slides can be found here: https://legaudinier.github.io/Ready-Git-Set-Go/#.

Categories: FLOSS Project Planets

DataCamp: SciPy Cheat Sheet: Linear Algebra in Python

Planet Python - Wed, 2017-03-22 14:36

By now, you will have already learned that NumPy, one of the fundamental packages for scientific computing, forms at least in part the foundation of other important packages that you might use for data manipulation and machine learning with Python. One of those packages is SciPy, another one of the core packages for scientific computing in Python that provides mathematical algorithms and convenience functions built on the NumPy extension of Python.

You might now wonder why this library might come in handy for data science. 

Well, SciPy has many modules that will help you understand some of the basic components that you need to master when you're learning data science, namely math, stats, and machine learning. You can find out what other things you need to tackle to learn data science here. You'll see that for statistics, for example, a module like scipy.stats will definitely be of interest to you.

The other topic that was mentioned was machine learning: here, the scipy.linalg and scipy.sparse modules will offer everything that you're looking for to understand machine learning concepts such as eigenvalues, regression, and matrix multiplication...

But, what is maybe the most obvious is that most machine learning techniques deal with high-dimensional data and that data is often represented as matrices. What's more, you'll need to understand how to manipulate these matrices.  

That is why DataCamp has made a SciPy cheat sheet that will help you to master linear algebra with Python. 

Take a look by clicking on the button below:

You'll see that this SciPy cheat sheet covers the basics of linear algebra that you need to get started: it provides a brief explanation of what the library has to offer and how you can use it to interact with NumPy, and goes on to summarize topics in linear algebra, such as matrix creation, matrix functions, basic routines that you can perform with matrices, and matrix decompositions from scipy.linalg. Sparse matrices are also included, with their own routines, functions, and decompositions from the scipy.sparse module. 

(Above is the printable version of this cheat sheet)

Python for Data-Science Cheat Sheet: SciPy - Linear Algebra

SciPy

The SciPy library is one of the core packages for scientific computing that provides mathematical algorithms and convenience functions built on the NumPy extension of Python.

Asking For Help

>>> help(scipy.linalg.diagsvd)

Interacting With NumPy

>>> import numpy as np
>>> a = np.array([1,2,3])
>>> b = np.array([(1+5j,2j,3j), (4j,5j,6j)])
>>> c = np.array([[(1.5,2,3), (4,5,6)], [(3,2,1), (4,5,6)]])

Index Tricks

>>> np.mgrid[0:5,0:5]               Create a dense meshgrid
>>> np.ogrid[0:2,0:2]               Create an open meshgrid
>>> np.r_[3,[0]*5,-1:1:10j]         Stack arrays vertically (row-wise)
>>> np.c_[b,c]                      Stack arrays horizontally (column-wise)

Shape Manipulation

>>> np.transpose(b)                 Permute array dimensions
>>> b.flatten()                     Flatten the array
>>> np.hstack((b,c))                Stack arrays horizontally (column-wise)
>>> np.vstack((a,b))                Stack arrays vertically (row-wise)
>>> np.hsplit(c,2)                  Split the array horizontally at the 2nd index
>>> np.vsplit(d,2)                  Split the array vertically at the 2nd index

Polynomials

>>> from numpy import poly1d
>>> p = poly1d([3,4,5])             Create a polynomial object

Vectorizing Functions

>>> def myfunc(a):
...     if a < 0:
...         return a*2
...     else:
...         return a/2
>>> np.vectorize(myfunc)            Vectorize functions

Type Handling

>>> np.real(b)                      Return the real part of the array elements
>>> np.imag(b)                      Return the imaginary part of the array elements
>>> np.real_if_close(c,tol=1000)    Return a real array if complex parts are close to 0
>>> np.cast['f'](np.pi)             Cast object to a data type

Other Useful Functions

>>> np.angle(b,deg=True)            Return the angle of the complex argument
>>> g = np.linspace(0,np.pi,num=5)  Create an array of evenly spaced values (number of samples)
>>> g[3:] += np.pi
>>> np.unwrap(g)                    Unwrap phase angles
>>> np.logspace(0,10,3)             Create an array of evenly spaced values (log scale)
>>> np.select([c<4],[c*2])          Return values from a list of arrays depending on conditions
>>> misc.factorial(a)               Factorial
>>> misc.comb(10,3,exact=True)      Combine N things taken at k time
>>> misc.central_diff_weights(3)    Weights for Np-point central derivative
>>> misc.derivative(myfunc,1.0)     Find the n-th derivative of a function at a point

Linear Algebra

You'll use the linalg and sparse modules. Note that scipy.linalg contains and expands on numpy.linalg.

>>> from scipy import linalg, sparse

Creating Matrices

>>> A = np.matrix(np.random.random((2,2)))
>>> B = np.asmatrix(b)
>>> C = np.mat(np.random.random((10,5)))
>>> D = np.mat([[3,4], [5,6]])

Basic Matrix Routines

Inverse
>>> A.I                             Inverse
>>> linalg.inv(A)                   Inverse

Transposition
>>> A.T                             Transpose matrix
>>> A.H                             Conjugate transposition

Trace
>>> np.trace(A)                     Trace

Norm
>>> linalg.norm(A)                  Frobenius norm
>>> linalg.norm(A,1)                L1 norm (max column sum)
>>> linalg.norm(A,np.inf)           L inf norm (max row sum)

Rank
>>> np.linalg.matrix_rank(C)        Matrix rank

Determinant
>>> linalg.det(A)                   Determinant

Solving linear problems
>>> linalg.solve(A,b)               Solver for dense matrices
>>> E = np.mat(a).T                 Solver for dense matrices
>>> linalg.lstsq(F,E)               Least-squares solution to linear matrix equation

Generalized inverse
>>> linalg.pinv(C)                  Compute the pseudo-inverse of a matrix (least-squares solver)
>>> linalg.pinv2(C)                 Compute the pseudo-inverse of a matrix (SVD)

Creating Sparse Matrices

>>> F = np.eye(3, k=1)              Create a 3x3 matrix with ones on the first superdiagonal
>>> G = np.mat(np.identity(2))      Create a 2x2 identity matrix
>>> C[C > 0.5] = 0
>>> H = sparse.csr_matrix(C)        Compressed Sparse Row matrix
>>> I = sparse.csc_matrix(D)        Compressed Sparse Column matrix
>>> J = sparse.dok_matrix(A)        Dictionary Of Keys matrix
>>> E.todense()                     Sparse matrix to full matrix
>>> sparse.isspmatrix_csc(A)        Identify sparse matrix

Sparse Matrix Routines

Inverse
>>> sparse.linalg.inv(I)            Inverse

Norm
>>> sparse.linalg.norm(I)           Norm

Solving linear problems
>>> sparse.linalg.spsolve(H,I)      Solver for sparse matrices

Sparse Matrix Functions

>>> la, v = sparse.linalg.eigs(F,1) Eigenvalues and eigenvectors
>>> sparse.linalg.svds(H, 2)        SVD
>>> sparse.linalg.expm(I)           Sparse matrix exponential

Matrix Functions

Addition
>>> np.add(A,D)                     Addition

Subtraction
>>> np.subtract(A,D)                Subtraction

Division
>>> np.divide(A,D)                  Division

Multiplication
>>> A @ D                           Multiplication operator (Python 3)
>>> np.multiply(D,A)                Multiplication
>>> np.dot(A,D)                     Dot product
>>> np.vdot(A,D)                    Vector dot product
>>> np.inner(A,D)                   Inner product
>>> np.outer(A,D)                   Outer product
>>> np.tensordot(A,D)               Tensor dot product
>>> np.kron(A,D)                    Kronecker product

Exponential Functions
>>> linalg.expm(A)                  Matrix exponential
>>> linalg.expm2(A)                 Matrix exponential (Taylor series)
>>> linalg.expm3(D)                 Matrix exponential (eigenvalue decomposition)

Logarithm Function
>>> linalg.logm(A)                  Matrix logarithm

Trigonometric Functions
>>> linalg.sinm(D)                  Matrix sine
>>> linalg.cosm(D)                  Matrix cosine
>>> linalg.tanm(A)                  Matrix tangent

Hyperbolic Trigonometric Functions
>>> linalg.sinhm(D)                 Hyperbolic matrix sine
>>> linalg.coshm(D)                 Hyperbolic matrix cosine
>>> linalg.tanhm(A)                 Hyperbolic matrix tangent

Matrix Sign Function
>>> linalg.signm(A)                 Matrix sign function

Matrix Square Root
>>> linalg.sqrtm(A)                 Matrix square root

Arbitrary Functions
>>> linalg.funm(A, lambda x: x*x)   Evaluate matrix function

Decompositions

Eigenvalues and Eigenvectors
>>> la, v = linalg.eig(A)           Solve ordinary or generalized eigenvalue problem for square matrix
>>> l1, l2 = la                     Unpack eigenvalues
>>> v[:,0]                          First eigenvector
>>> v[:,1]                          Second eigenvector
>>> linalg.eigvals(A)               Compute eigenvalues

Singular Value Decomposition
>>> U,s,Vh = linalg.svd(B)          Singular Value Decomposition (SVD)
>>> M,N = B.shape
>>> Sig = linalg.diagsvd(s,M,N)     Construct sigma matrix in SVD

LU Decomposition
>>> P,L,U = linalg.lu(C)            LU Decomposition

Sparse Matrix Decompositions

>>> np.info(np.matrix)              Asking for help
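As a quick, runnable sanity check of a few of the routines above (assuming NumPy and SciPy are installed):

```python
import numpy as np
from scipy import linalg

# Solve the dense linear system A x = b and verify the solution.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = linalg.solve(A, b)  # solver for dense matrices
assert np.allclose(A @ x, b)
assert np.allclose(x, [2.0, 3.0])

# A few of the basic matrix routines from the cheat sheet:
assert np.isclose(np.trace(A), 5.0)    # trace: 3 + 2
assert np.isclose(linalg.det(A), 5.0)  # determinant: 3*2 - 1*1
assert np.linalg.matrix_rank(A) == 2   # full rank
```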

PS. Don't miss our other Python cheat sheets for data science, which cover NumPy, Scikit-Learn, Bokeh, Pandas, and the Python basics.

Categories: FLOSS Project Planets

Arturo Borrero González: IPv6 and CGNAT

Planet Debian - Wed, 2017-03-22 13:47

Today I finished reading an interesting article by the fourth-largest Spanish ISP regarding IPv6 and CGNAT. The article is in Spanish, but I will translate the most important statements here.

Having a Spanish Internet operator talk about this subject is itself good news. We have been lacking any news regarding IPv6 in our country for years. I mean, no news from private operators. Public networks like the one where I do my daily job have been offering native IPv6 for almost a decade…

The title of the article is “What is CGNAT and why is it used”.

They start by admitting that this technique is used to address the issue of IPv4 exhaustion. Good. They move on to say that IPv6 was designed to address IPv4 exhaustion. Great. Then they state that ‘‘the internet network is not ready for IPv6 support’’. Also that ‘‘IPv6 has the handicap of many websites not supporting it’’. Sorry?

That is not true. If they refer to the core of internet (i.e, RIRs, interexchangers, root DNS servers, core BGP routers, etc) they have been working with IPv6 for ages now. If they refer to something else, for example Google, Wikipedia, Facebook, Twitter, Youtube, Netflix or any random hosting company, they do support IPv6 as well. Hosting companies which don’t support IPv6 are only a few, at least here in Europe.

The traffic to/from these services is clearly the vast majority of the traffic traveling the wires nowadays. And they support IPv6.

The article continues defending CGNAT. They refer to IPv6 as an alternative to CGNAT. No, sorry, CGNAT is an alternative to you not doing your IPv6 homework.

The article ends by insinuating that CGNAT is more secure and useful than IPv6. That’s the final joke. They mention some absurd example of IP cams being accessed from the internet by anyone.

Sure, by using CGNAT you are indeed making the network practically one-way only. There exists RFC7021 which refers to the big issues of a CGNAT network. So, by using CGNAT you sacrifice a lot of usability in the name of security. This supposed security can be replicated by the most simple possible firewall, which could be deployed in Dual Stack IPv4/IPv6 using any modern firewalling system, like nftables.
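To make the point concrete, the "most simple possible firewall" mentioned above can be written once for both protocols using an nftables inet table. This is only an illustrative sketch (the allowed ports are examples), not a complete production ruleset:

```
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        # Allow return traffic and loopback
        ct state established,related accept
        iif "lo" accept

        # Allow only the services you actually expose
        tcp dport { 22, 443 } accept

        # ICMP / ICMPv6 (IPv6 needs neighbour discovery to function)
        icmp type echo-request accept
        icmpv6 type { echo-request, nd-neighbor-solicit, nd-neighbor-advert } accept
    }
}
```

A single ruleset like this filters IPv4 and IPv6 together, giving the "inbound traffic is dropped by default" property that CGNAT provides as a side effect, without sacrificing end-to-end reachability when you do want it.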

(Here is a good blogpost of RFC7021 for spanish readers: Midiendo el impacto del Carrier-Grade NAT sobre las aplicaciones en red)

By the way, Google kindly provides some statistics regarding their IPv6 traffic. These stats clearly show exponential growth.

Other ISPs are giving IPv6 strong precedence over IPv4; that’s the case of Verizon in the USA: Verizon Static IP Changes IPv4 to Persistent Prefix IPv6.

My article seems a bit like a rant, but I couldn’t miss the opportunity to call for native IPv6. None of the major Spanish ISPs offer it.

Categories: FLOSS Project Planets

Acquia Developer Center Blog: 254: Mumbai Memories - meet Rakesh James

Planet Drupal - Wed, 2017-03-22 13:13

My trusty microphone, camera, and I recorded a few great conversations at DrupalCon in Mumbai that have never been released until now. Today, a conversation with Rakesh James, who credits Dries for giving him a way to live and support his family with Drupal. Rakesh is an extraordinary and generous person; he's personally paid that forward, giving others in India the chance to change their lives, too, by teaching hundreds of people Drupal and giving them a shot at a career, too. He's also a top 30 contributor to Drupal 8 core.

Rakesh told me about the moment he both discovered and fell in love with Drupal. His manager gave him permission to check out Drupal for a project, "I started it with Drupal 5. I got a big error. My senior [colleague] said I could post on Drupal.org because he was sitting far away and could not debug for me. I posted the error ... After one hour somebody from the community replied that it would be better if you started with Drupal 6. That was amazing. If you post it, somebody from the [other side] of the planet replied to me, 'You should do this.' From that amazing [moment] till now, I have that feeling. All the time when you go to the community and post something, you'll be getting the right answer. In an hour's time. That is so amazing."

"I feel like when I have gotten something, I should give back to others who are struggling. If they have a little education, know how to play with the computer, I should teach them Drupal. That is the best way of doing it. I spread the word because I got something. The people are around, this magic should be with them also ... So they will have a better life. They'll have a better salary. It's a better way to do that; teach the kids in pre-university colleges. We should teach them. I volunteer my time for that. Two Saturdays a month, we go out to the colleges. Every first Saturday, we have a community meet-up; the other Saturday we go to a college and teach them Drupal."

If you have any doubts about Rakesh's sincerity in all this, watch how moved he is in the video from about 10:30 to 11:50 :-)

DrupalCon Asia Mumbai 2016 was almost exactly a year ago now. Of all the conferences I have been to, Mumbai was probably my favorite. I met an incredible, active, enthusiastic Drupal community that welcomed everyone with open arms, incredible food (!), and a LOT of selfies :-)

Subscribe to the podcast!

Subscribe to the Acquia Podcast in iTunes and rate this episode!

Subscribe via our RSS feed.

Skill Level: Beginner, Intermediate, Advanced
Categories: FLOSS Project Planets

Python Engineering at Microsoft: Interactive Windows in VS 2017

Planet Python - Wed, 2017-03-22 13:00

Last week we announced that the Python development workload is available now in Visual Studio Preview, and briefly covered some of the new improvements in Visual Studio 2017. In this post, we are going to go into more depth on the improvements for the Python Interactive Window.

These are currently available in Visual Studio Preview, and will become available in one of the next updates to the stable release. Over the lifetime of Visual Studio 2017 we will have opportunities to further improve these features, so please provide feedback and suggestions at our GitHub site.

Interactive Windows

People who have been using Visual Studio with many versions of Python installed will be used to seeing a long list of interactive windows – one for each version of Python. Selecting any one of these would let you run short snippets of code with that version and see the results immediately. However, because we only allowed one window for each, there was no way to open multiple windows for the same Python version and try different things.

In Visual Studio 2017, the main menu has been simplified to only include a single entry. Selecting this entry (or using the Alt+I keyboard shortcut) will open an interactive window with some new toolbar items:

On the right-hand side, you’ll see the new “Environment” dropdown. With it, you can select any version of Python you have installed, and the interactive window will switch to it. This will reset your current state (after prompting), but will keep your history and previous output.

The button on the left-hand side of the toolbar creates a new interactive window. The windows are independent of each other: they do not share any state, history, or output; they can use the same or different versions of Python; and they may be arranged however you like. This flexibility lets you try two different pieces of code in the same version of Python, viewing the results side by side, without having to repeatedly reset everything.

Code Cells

One workflow that we see people using very successfully is what we internally call the scratchpad. In this approach, you have a Python script that contains many little code snippets that you can copy-paste from the editor into an interactive window. Typically you don’t run the script in its entirety, as the code snippets may be completely unrelated. For example, you might have a “scratchpad.py” file with a range of your favorite matplotlib or Bokeh plot commands.

Previously, we provided a command to send selected text to an interactive window (press Ctrl+E twice) to easily copy code from an editor window. This command still exists, but has been enhanced in Visual Studio 2017 in the following ways.

We’ve added Ctrl+Enter as a new keyboard shortcut for this command, which will help more people use muscle memory that they may have developed in other tools. So if you are comfortable with pressing Ctrl+E twice, you can keep doing that, or you can switch to the more convenient Ctrl+Enter shortcut.

We have also made the shortcut work when you don’t have any code selected. (This is the complicated bit, but we’ve found that it makes the most sense once you start using it.) In the normal case, pressing Ctrl+Enter without a selection will send the current line of text and move the caret to the following line. We do not try to figure out whether it is a complete statement or not, so you might send an invalid statement, though in this case we won’t try to execute it straight away. As you send more lines of code (by pressing Ctrl+Enter repeatedly), we will build up a complete statement and then execute it.

For the scratchpad workflow though, you’ll typically have a small block of code you want to send all at once. To simplify this, we’ve added support for code cells. Adding a comment starting with #%% to your code will begin a code cell and end the previous one. Code cells can be collapsed, and using Ctrl+Enter (or Ctrl+E twice) inside a code cell will send the entire cell to the interactive window and move to the next one. So when sending a block of code, you can simply click anywhere inside the cell and press Ctrl+Enter to run the whole block.

We also detect code cells starting with comments like # In[1]:, which is the format you get when exporting a Jupyter notebook as a Python file. So you can easily execute a notebook from Azure Notebooks by downloading as a Python file, opening in Visual Studio, and using Ctrl+Enter to run each cell.
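For instance, a scratchpad file using both cell marker styles might look like the sketch below (the file contents are made up for illustration; only the `#%%` and `# In[1]:` markers are the real convention):

```python
# scratchpad.py -- a hypothetical scratchpad split into code cells.
# Pressing Ctrl+Enter anywhere inside a cell sends the whole cell
# to the interactive window.

#%% Build a small dataset
data = [x ** 2 for x in range(10)]
print(data)

#%% Summarize it
total = sum(data)
print("sum:", total)

# In[1]:
# Cells exported from a Jupyter notebook use this marker style
# instead; Visual Studio detects it the same way.
print("mean:", total / len(data))
```

The file is still an ordinary Python script, so it also runs top to bottom outside the IDE.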

Startup Scripts

As you start using interactive windows more in your everyday workflow, you will likely develop helper functions that you use regularly. While you could write these into a code cell and run them each time you restart, there is a better way.

In the Python Environments window, each environment has buttons to open an interactive window and to explore interactive scripts, plus a checkbox to enable IPython interactive mode. When the checkbox is selected, all interactive windows for that environment will start with IPython mode enabled. This will allow you to use inline plots, as well as the extended IPython syntax such as name? to view help and !command for shell commands. We recommend enabling IPython interactive mode when you have installed an Anaconda distribution, as it requires extra packages.

The “Explore interactive scripts” button will open a directory in File Explorer from your documents folder. You can put any Python scripts you like into this folder and they will be run every time you start an interactive window for that environment. For example, you may make a function that opens a DataFrame in Excel, and then save it to this folder so that you can use it in your interactive window.
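As a concrete sketch, you might keep a small helper like the one below in that folder so it is defined in every new interactive window for the environment (the helper’s name and behavior are made up for illustration):

```python
# A hypothetical startup-script helper: because it lives in the
# interactive-scripts folder, it is defined automatically whenever
# an interactive window opens for this environment.

def peek(iterable, n=5):
    """Print the length of an iterable and its first n items."""
    items = list(iterable)
    print(f"{len(items)} items; showing first {min(n, len(items))}:")
    for item in items[:n]:
        print(" ", item)

peek(range(100), 3)  # prints the count, then 0, 1, 2
```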


Thanks for installing and using Python support in Visual Studio 2017. We hope that you’ll find these interactive window enhancements useful additions to your development workflow.

As usual, we are constantly looking to improve all of our features based on what our users need and value most. To report any issues or provide suggestions, feel free to post them at our GitHub site.

Categories: FLOSS Project Planets

Michael Stapelberg: Debian stretch on the Raspberry Pi 3 (update)

Planet Debian - Wed, 2017-03-22 12:36

I previously wrote about my Debian stretch preview image for the Raspberry Pi 3.

Now, I’m publishing an updated version, containing the following changes:

  • A new version of the upstream firmware makes the Ethernet MAC address persist across reboots.
  • Updated initramfs files (without updating the kernel) are now correctly copied to the VFAT boot partition.
  • The initramfs’s file system check now works as the required fsck binaries are now available.
  • The root file system is now resized to fill the available space of the SD card on first boot.
  • SSH access is now enabled, restricted via iptables to local network source addresses only.
  • The image uses the linux-image-4.9.0-2-arm64 4.9.13-1 kernel.

A couple of issues remain, notably the lack of HDMI, WiFi and bluetooth support (see wiki:RaspberryPi3 for details). Any help with fixing these issues is very welcome!

As a preview version (i.e. unofficial, unsupported, etc.) until all the necessary bits and pieces are in place to build images in a proper place in Debian, I built and uploaded the resulting image. Find it at https://people.debian.org/~stapelberg/raspberrypi3/2017-03-22/. To install the image, insert the SD card into your computer (I’m assuming it’s available as /dev/sdb) and copy the image onto it:

$ wget https://people.debian.org/~stapelberg/raspberrypi3/2017-03-22/2017-03-22-raspberry-pi-3-stretch-PREVIEW.img
$ sudo dd if=2017-03-22-raspberry-pi-3-stretch-PREVIEW.img of=/dev/sdb bs=5M

If resolving client-supplied DHCP hostnames works in your network, you should be able to log into the Raspberry Pi 3 using SSH after booting it:

$ ssh root@rpi3 # Password is “raspberry”
Categories: FLOSS Project Planets

NumFOCUS: nteract: Building on top of Jupyter (from a rich REPL toolkit to interactive notebooks)

Planet Python - Wed, 2017-03-22 12:29
This post originally appeared on the nteract blog.

[Figure: Blueprint for nteract]

nteract builds upon the very successful foundations of Jupyter. I think of Jupyter as a brilliantly rich REPL toolkit. A typical REPL (Read-Eval-Print-Loop) is an interpreter that takes input from the user and prints results (on stdout and stderr).

Here’s the standard Python interpreter, a REPL many of us know and love.

[Figure: Standard Python interpreter]

The standard terminal’s spartan user interface, while useful, leaves something to be desired. IPython was created in 2001 to refine the interpreter, primarily by extending display hooks in Python. Iterative improvement on the interpreter was a big boon for interactive computing experiences, especially in the sciences.

[Figure: IPython terminal]

As the team behind IPython evolved, so did their ambitions to create richer consoles and notebooks. Core to this was crafting the building blocks of the protocol, established on top of ZeroMQ, which led to the creation of the IPython notebook. It decoupled the REPL from a closed loop in one system into multiple components communicating together.

[Figure: IP[y]thon Notebook]

As IPython came to embrace more than just Python (R, Julia, Node.js, Scala, …), the IPython leads created a home for the language-agnostic parts: Jupyter.

[Figure: Jupyter Notebook Classic Edition]

Jupyter isn’t just a notebook or a console. It’s an establishment of well-defined protocols and formats. It’s a community of people who come together to build interactive computing experiences. We share our knowledge across the sciences, academia, and industry; there’s a lot of overlap in vision, goals, and priorities.

That being said, one project alone may not meet everyone’s specific needs and workflows. Luckily, with Jupyter’s solid foundation of protocols for communicating with interpreters (Jupyter kernels) and document formats (e.g. .ipynb), you too can build your ideal interactive computing environment.

In pursuit of this, members of the Jupyter community created nteract, a Jupyter notebook desktop application as well as an ecosystem of JavaScript packages to support it and more. What is the platform that Jupyter provides to build rich interactive experiences? To explore this, I will describe the Jupyter protocol with a lightweight (non-compliant) version that hopefully helps explain how this works under the hood.

A lightweight Hello World

When a user runs a snippet of code, a message is formed. We send that message to the kernel and receive replies as JSON. We’ve received two types of messages so far:
  • execution status for the interpreter — busy or idle
  • a “stream” of stdout
The status tells us the interpreter is ready for more, and the stream data is shown below the editor in the output area of a notebook.

What happens when a longer computation runs? As multiple outputs come in, they get appended to the display area below the code editor.

How are tables, plots, and other rich media shown? The power and simplicity of the protocol emerges with the execute_result and display_data message types. Both carry a data field mapping multiple media types to content, letting the frontend choose how to represent the result. Pandas, for example, provides both text/plain and text/html for tabular data. When the front-end receives the HTML payload, it embeds it directly in the outputs, so you get a nice table.

[Figure: DataFrame rendered as a table]

This isn’t limited to HTML and text; we can handle images and any other known transform. The primary currency for display is these bundles of media types to data. In nteract we have a few custom MIME types, which are also coming soon to a Jupyter notebook near you: GeoJSON, and Vega / Vega-Lite via Altair (https://github.com/altair-viz/altair).

How do you build a notebook document?

We’ve seen how our code gets sent across to the runtime and what we receive on the notebook side. How do we form a notebook? How do we associate messages with the cells they originated from?
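One way a frontend might do this routing, sketched in Python with a simplified, non-compliant message shape (the real Jupyter protocol carries much richer headers):

```python
# A minimal sketch -- not the real Jupyter wire format -- of routing
# kernel replies back to the cell that issued the request, keyed by
# parent_id (the msg_id of the originating execute_request).

def fold_messages(messages):
    """Fold a stream of reply messages into per-cell state."""
    cells = {}
    for msg in messages:
        cell = cells.setdefault(msg["parent_id"], {"status": None, "outputs": []})
        if msg["type"] == "status":
            cell["status"] = msg["state"]        # "busy" or "idle"
        elif msg["type"] == "stream":
            cell["outputs"].append(msg["text"])  # stdout/stderr text
    return cells

replies = [
    {"parent_id": "0001", "type": "status", "state": "busy"},
    {"parent_id": "0001", "type": "stream", "text": "hello\n"},
    {"parent_id": "0001", "type": "status", "state": "idle"},
]
print(fold_messages(replies))
# {'0001': {'status': 'idle', 'outputs': ['hello\n']}}
```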

We need an ID to identify where an execute_request comes from. Let’s bring in the concept of a message ID and form the cell state over time. We send the execute_request as message 0001 and initialize our state; each message afterward lists the originating msg_id as parent_id 0001. Responses start flowing in, starting with message 0002, which we store as part of the state of our cell. The plain-text output arrives in message 0003, which we fold into the outputs structure of our cell. Finally, we receive a status to inform us the kernel is no longer busy, giving us the final state of the cell.

That’s just one cell, though; what would an entire notebook structure look like? One way of thinking about a notebook is that it’s a rolling work log of computations: a linear list of cells. As Jupyter messages are sent back and forth, a notebook is formed. We use message IDs to route outputs to cells. Users run code, get results, and view representations of their data.

This very synchronous, imperative description doesn’t give the Jupyter protocol (and ZeroMQ, for that matter) enough credence. In reality, it’s a hyper reactor of asynchronous interactive feedback, enabling you to iterate quickly and explore the space of computation and visualization. These messages come in asynchronously, and there are many more messages available within the core protocol.

I encourage you to get involved in both the nteract and Jupyter projects. We have plenty to explore and build together. Feel free to reach out on issues or the nteract Slack.
Thank you to Safia Abdalla, Lukas Geiger, Paul Ivanov, Peggy Rayzis, and Carol Willing for reviewing, providing feedback, and editing this post.

Categories: FLOSS Project Planets

Reading old stuff

Planet KDE - Wed, 2017-03-22 12:18

A few months ago, Helio blogged about building KDE 1 (again) on modern systems. So recently, while cleaning up some boxes of old books, I found the corresponding books — which shows there was once a market for writing books about the Linux desktop.

Particularly the top book, “Using KDE” by Nicholas Wells, is interesting. The first page I opened it up to was a pointer to the KDE Translation teams, and information on how to contribute, how to get in touch with the translation teams, etc. You can still find the translation info online, although the location has changed since 2000.

There’s also tips and tricks on getting your .xinitrc right, and how to safely fall back to twm. I find this amusing, because I still use the same techniques when testing new Plasma 5 packages in a FreeBSD VM. It’s good that the old standalone twm is still there, but it would be much more amusing to fall back to KDE 1.1.2 if Plasma 5 doesn’t start right, I think.

Categories: FLOSS Project Planets

Chef Intermediate Training

Planet KDE - Wed, 2017-03-22 11:57

I did a day’s training at the FLOSS UK conference in Manchester on Chef. Anthony Hodson came from Chef (a company with over 200 employees) to provide this intermediate training, which covered writing recipes using test-driven development. Thanks to Chef and Anthony and FLOSS UK for providing it cheap. Here are some notes for my own interest and anyone else who cares.

Using chef generate we started a new cookbook called http.

This cookbook contains a .kitchen.yml file. Test Kitchen is a Chef tool to run tests on Chef recipes. ‘kitchen list’ will show the machines it’s configured to run. By default it uses VirtualBox and CentOS/Ubuntu; this can be changed to Docker or whatever. ‘kitchen create’ will make them, ‘kitchen converge’ will deploy, ‘kitchen login’ will log into a VM, and ‘kitchen verify’ will run the tests. ‘kitchen test’ will destroy, then set up and verify; it takes a bit longer.

Write the test first.  If you’re not sure what the test should be write stub/placeholder statements for what you do know then work out the code.

ChefSpec (an RSpec-based language) provides the in-memory unit tests for recipes; it’s quicker and does finer-grained tests than the Kitchen tests (which use InSpec and do black-box tests on the final result). Run with ‘chef exec rspec ../default-spec.rb’; rspec shows a * for a stub.

Beware if a test passes first time, it might be a false positive.

ohai is a standalone or chef client tool which detects the node attributes and passes to the chef client.  We didn’t get onto this as it was for a follow on day.

Pry is a Ruby debugger.  It’s a Gem and part of chefdk.

To debug recipes, use pry in the recipe; it drops you into a debug prompt for checking that the values are what you think they are.

I still find deploying chef a nightmare: it won’t install in the normal way on my preferred Scaleway server because they’re ARM, and by default it needs a Chef server, but you can just use chef-client with --local-mode. Then there’s chef solo, chef zero and knife solo, which all do things that I haven’t quite got my head round. All interesting to learn anyway.


Categories: FLOSS Project Planets

Dirk Eddelbuettel: Suggests != Depends

Planet Debian - Wed, 2017-03-22 11:16

A number of packages on CRAN use Suggests: casually.

They list other packages as "not required" in Suggests: -- as opposed to absolutely required via Imports: or the older Depends: -- yet do not test for their use in either examples or, more commonly, unit tests.

So e.g. the unit tests are bound to fail because, well, Suggests != Depends.

This has been accommodated for many years by all parties involved by treating Suggests as a Depends and installing unconditionally. As I understand it, CRAN appears to flip a switch to automatically install all Suggests from major repositories, glossing over what I consider to be a packaging shortcoming. (As an aside, treatment of Additional_repositories: is indeed optional; Brooke Anderson and I have a fine paper under review on this.)

I spend a fair amount of time with reverse dependency ("revdep") checks of packages I maintain, and I will no longer accommodate these packages.

These revdep checks take long enough as it is, so I will now blacklist these packages that are guaranteed to fail when their "optional" dependencies are not present.

Writing R Extensions says in Section 1.1.3

All packages that are needed to successfully run R CMD check on the package must be listed in one of ‘Depends’ or ‘Suggests’ or ‘Imports’. Packages used to run examples or tests conditionally (e.g. via if(require(pkgname))) should be listed in ‘Suggests’ or ‘Enhances’. (This allows checkers to ensure that all the packages needed for a complete check are installed.)

In particular, packages providing “only” data for examples or vignettes should be listed in ‘Suggests’ rather than ‘Depends’ in order to make lean installations possible.


It used to be common practice to use require calls for packages listed in ‘Suggests’ in functions which used their functionality, but nowadays it is better to access such functionality via :: calls.

and continues in a later section:

Note that someone wanting to run the examples/tests/vignettes may not have a suggested package available (and it may not even be possible to install it for that platform). The recommendation used to be to make their use conditional via if(require("pkgname")): this is fine if that conditioning is done in examples/tests/vignettes.

I will now exercise my option to use 'lean installations' as discussed here. If you want your package included in tests I run, please make sure it tests successfully when only its required packages are present.

Categories: FLOSS Project Planets

Dataquest: Turbocharge Your Data Acquisition using the data.world Python Library

Planet Python - Wed, 2017-03-22 11:00

When working with data, a key part of your workflow is finding and importing data sets. Being able to quickly locate data, understand it and combine it with other sources can be difficult.

One tool to help with this is data.world, where you can search for, copy, analyze, and download data sets. In addition, you can upload your data to data.world and use it to collaborate with others.

In this tutorial, we’re going to show you how to use data.world’s Python library to easily work with data from your python scripts or Jupyter notebooks. You’ll need to create a free data.world account to view the data set and follow along.

The data.world python library allows you to bring data that’s stored in a data.world data set straight into your workflow, without having to first download the data locally and transform it into a format you require.

Because data sets in data.world are stored in the format that the user originally uploaded them in, you often find great data sets that exist in a less than ideal format, such as multiple sheets of an Excel workbook, where...

Categories: FLOSS Project Planets

DataCamp: New Python Course: Network Analysis

Planet Python - Wed, 2017-03-22 09:14

Hi Pythonistas! Today we're launching Network Analysis in Python by Eric Ma!

From online social networks such as Facebook and Twitter to transportation networks such as bike sharing systems, networks are everywhere, and knowing how to analyze this type of data will open up a new world of possibilities for you as a Data Scientist. This course will equip you with the skills to analyze, visualize, and make sense of networks. You'll apply the concepts you learn to real-world network data using the powerful NetworkX library. With the knowledge gained in this course, you'll develop your network thinking skills and be able to start looking at your data with a fresh perspective!

Start for free

Python: Network Analysis features interactive exercises that combine high-quality video, in-browser coding, and gamification for an engaging learning experience that will make you a master of network analysis in Python!

What you'll learn:

In the first chapter, you'll be introduced to fundamental concepts in network analytics while becoming acquainted with a real-world Twitter network dataset that you will explore throughout the course. In addition, you'll learn about NetworkX, a library that allows you to manipulate, analyze, and model graph data. You'll learn about different types of graphs as well as how to rationally visualize them. Start first chapter for free.

In chapter 2, you'll learn about ways of identifying nodes that are important in a network. In doing so, you'll be introduced to more advanced concepts in network analysis as well as learn the basics of path-finding algorithms. The chapter concludes with a deep dive into the Twitter network dataset which will reinforce the concepts you've learned, such as degree centrality and betweenness centrality.
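As a taste of that idea, here is a minimal pure-Python sketch of degree centrality — the fraction of other nodes each node is connected to. (NetworkX computes this for you via nx.degree_centrality; the hand-rolled version and the toy edge list below are only for illustration.)

```python
# Degree centrality from scratch, for an undirected graph given as
# a list of edges. NetworkX normalizes by the number of *other*
# nodes, so we do the same here.

def degree_centrality(edges):
    nodes = {n for edge in edges for n in edge}
    degree = {n: 0 for n in nodes}
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return {n: d / (len(nodes) - 1) for n, d in degree.items()}

# A tiny star-shaped network: every node connects to "a".
edges = [("a", "b"), ("a", "c"), ("a", "d")]
print(dict(sorted(degree_centrality(edges).items())))
# {'a': 1.0, 'b': 0.3333333333333333, 'c': 0.3333333333333333, 'd': 0.3333333333333333}
```

The hub "a" scores 1.0 because it touches every other node, which is exactly the kind of "important node" the chapter teaches you to find.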

Chapter 3 is all about finding interesting structures within network data. You'll learn about essential concepts such as cliques, communities, and subgraphs, which will leverage all of the skills you acquired in Chapter 2. By the end of this chapter, you'll be ready to apply the concepts you've learned to a real-world case study.

In the final chapter of the course, you'll consolidate everything you've learned by diving into an in-depth case study of GitHub collaborator network data. This is a great example of real-world social network data, and your newly acquired skills will be fully tested. By the end of this chapter, you'll have developed your very own recommendation system which suggests GitHub users who should collaborate together. Enjoy!

Start course for free

Categories: FLOSS Project Planets