FLOSS Project Planets

Tennessee Leeuwenburg: A Twitter Memebot in Word2Vec

Planet Python - Mon, 2015-06-15 11:53
I wanted to explore some ideas with Word2Vec to see how it could potentially be applied in practice. I thought I would take a run at a Twitter bot that would try to do something semantically interesting and create new content. New ideas, from code.

Here's an example. Word2Vec is all about finding some kind of underlying representation of the semantics of words, and allowing some kind of traversal of that semantic space in a reliable fashion. It's about other things too, but what gets people really excited is that it's an approach which seems to mirror the way that we humans tend to form word relationships.

Let's just say I was partially successful. The meme I've chosen above is one of the better results from the work, but there were many somewhat-interesting outputs. I refrained from making the Twitter bot autonomous, as it had an unfortunate tendency to lock in on the most controversial tweets in my timeline, make some hilarious but often unfortunate inferences from them, and then meme them. Yeah, I'm not flicking that particular switch, thanks very much!

The libraries in use for this tutorial can be found at:

  • https://github.com/danieldiekmeier/memegenerator
  • https://github.com/danielfrg/word2vec
  • https://github.com/tweepy/tweepy
I recommend tweepy over other Twitter API libraries, at least for Python 3.4, as it was the only one which worked for me on the first try. I didn't get round to giving the others a second try, because I already had a working solution.
You'll need to go and get some Twitter API keys. I don't remember all the steps for this; I just kind of did it on instinct. There's a Stack Overflow question on the topic that may help: http://stackoverflow.com/questions/1808855/getting-new-twitter-api-consumer-and-secret-keys, but that's not what I used. Good luck :)
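For reference, getting an authenticated tweepy API object once you have the keys is short. A minimal sketch, assuming you've created an app and generated the four values (the uppercase names are placeholders):

import tweepy

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)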
This particular Twitter bot will select a random tweet from your timeline, then comment on it in the form of a meme. The relevance of those tweets is a bit hit-and-miss, to be honest. This could probably be improved by using topic modelling rather than random selection to find the most relevant keywords from the tweet.

public_tweets = api.home_timeline()

will fetch the most recent tweets from the current user's timeline. The code then chooses a random tweet, and focuses on words that are longer than 3 characters (a proxy for 'interesting' or 'relevant'). From this, we extract four words (if available). The goal is to produce a meme of the form "A is to B as C is to D". A, B and C are random words chosen from the tweet. D is a word found using word2vec. The fourth word is used to choose the image background by doing a flickr search.
indexes, metrics = model.analogy(pos=[ws[0], ws[1]], neg=[ws[2]], n=10)
ws.append(model.vocab[indexes][0])

The first line there is getting a list of candidate words for the end of our analogy. The second line is just picking the first one.
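To put those pieces in context, here is a minimal sketch of the whole selection-and-analogy step, assuming an authenticated tweepy api object and a loaded word2vec model; the function and variable names are illustrative, not the bot's actual code (which will be published with a later post):

import random

def pick_analogy_words(api, model):
    # Fetch the most recent tweets from the current user's timeline
    public_tweets = api.home_timeline()
    tweet = random.choice(public_tweets)

    # Words longer than 3 characters act as a crude 'interesting' filter;
    # keep only words the word2vec model actually knows about
    candidates = [w for w in tweet.text.lower().split()
                  if len(w) > 3 and w in model.vocab]
    if len(candidates) < 4:
        return None  # not enough usable words in this tweet

    ws = candidates[:4]  # A, B, C for the analogy; the 4th drives the flickr search

    # Ask word2vec for candidate Ds to complete "A is to B as C is to D"
    indexes, metrics = model.analogy(pos=[ws[0], ws[1]], neg=[ws[2]], n=10)
    ws.append(model.vocab[indexes][0])
    return ws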
For example, a human might approach this as follows. Suppose the tweet is:
"I really love unit testing. It makes my job so much easier when doing deployments."
The word selection might result in "Testing, Easier, Deployments, Job". The goal would be to come up with a word for "Testing is to easier as Deployments is to X" (over an image of a job). I might come up with the word "automatic". Who knows -- it's kind of hard to relate all of those things. 
Here's an image showing another dubious set of relationships.

There's some sense in it -- it certainly seems that breaking things can be trivial, that flicking a switch is easy, and that questioning is a bit like testing. The background evokes both randomness and goal-seeking. However, getting any more precise than that is drawing a long bow, and a lot of those relationships came up pretty randomly.
I could imagine this approach being used to suggest memes to a human reviewer, in a supervised-system approach. However, it's not really ready for autonomous use, being inadequate in both semantic meaning and sensitivity to content. That said, I do think it shows that it's pretty easy to pick these technologies up and put them to basic use.
There are a bunch of potential improvements I can think of which should result in better output. Focusing the word search towards the topic of the tweet is one. Selecting words for the analogy which are reasonably closely related would be another, and quite doable using the word2vec approach.
Understanding every step of this system requires a more involved explanation of what's going on, so I think the next few posts might be targeted at the intermediate steps and how they were arrived at, plus a walkthrough of each part of the code (which will be made available at that point).
Until next time, happy coding!

Categories: FLOSS Project Planets

Ioannis Canellos: Injecting Kubernetes Services in CDI managed beans using Fabric8

Planet Apache - Mon, 2015-06-15 11:16

The thing I love the most in Kubernetes is the way services are discovered. Why?

Mostly because user code doesn't have to deal with registering and looking up services, and also because there are no networking surprises (if you've ever tried a registry-based approach, you'll know what I am talking about).

This post is going to cover how you can use Fabric8 in order to inject Kubernetes services in Java using CDI.

Kubernetes Services

Covering in-depth Kubernetes Services is beyond the scope of this post, but I'll try to give a very brief overview of them.

In Kubernetes, applications are packaged as Docker containers. Usually it's a good idea to split an application into individual pieces, so you will have multiple Docker containers that most probably need to communicate with each other. Some containers may be colocated by placing them in the same Pod, while others may be remote and need a way to talk to each other. This is where Services enter the picture.

A container may bind to one or more ports providing one or more "services" to other containers. For example:
  • A database server.
  • A message broker.
  • A REST service.

The question is: "How do other containers know how to access those services?"

So, Kubernetes allows you to "label" each Pod and use those labels to "select" the Pods that provide a logical service. Those labels are simple key/value pairs.

For example, we can "label" a pod by specifying a label with key name and value mysql. We can then define a Service that exposes the mysql port; the service selector uses the key/value pair we specified above to determine which pod(s) provide the service.

The Service information is passed to each container as environment variables by Kubernetes. For each container that gets created, Kubernetes will make sure that the appropriate environment variables are set for ALL services visible to the container.

For the mysql service of the example above, the environment variables will include MYSQL_SERVICE_HOST and MYSQL_SERVICE_PORT; Kubernetes exposes every visible service as <SERVICE_NAME>_SERVICE_HOST and <SERVICE_NAME>_SERVICE_PORT, along with Docker-link style variables.

Fabric8 provides a CDI extension which can be used in order to simplify development of Kubernetes apps, by providing injection of Kubernetes resources.

Getting started with the Fabric8 CDI extension
To use the CDI extension, the first step is to add the dependency to the project. The next step is to decide which service you want to inject into which field, and then add a @ServiceName annotation to it.
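A minimal sketch of such an injection point (class and field names are illustrative, and I'm assuming the annotations live in Fabric8's io.fabric8.annotations package):

import javax.inject.Inject;
import io.fabric8.annotations.ServiceName;

public class MysqlDataStore {

    // Fabric8 resolves the Kubernetes service named "mysql"
    // and injects its URL as a plain string
    @Inject
    @ServiceName("mysql")
    private String serviceUrl;
}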
In the example above we have a class that needs a JDBC connection to a mysql database that is available via Kubernetes Services.

The injected serviceUrl will have the form [tcp|udp]://[host]:[port]. That is a perfectly fine URL, but it's not a proper JDBC URL, so we need a utility to convert it. This is the purpose of toJdbcUrl.

Even though it's possible to specify the protocol when defining the service, one is only able to specify core transport protocols such as TCP or UDP, and not something like http, jdbc, etc.

The @Protocol annotation

Having to find and replace the "tcp" or "udp" values with the application protocol is smelly, and it gets old really fast. To remove that boilerplate, Fabric8 provides the @Protocol annotation. This annotation allows you to select the application protocol that you want in your injected service URL. In the previous example that is "jdbc:mysql". So the code could look like:
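A sketch of what that could look like, with the same caveat that names and packages are assumptions:

import javax.inject.Inject;
import io.fabric8.annotations.Protocol;
import io.fabric8.annotations.ServiceName;

public class MysqlDataStore {

    // The tcp/udp prefix of the service URL is replaced
    // with the requested application protocol
    @Inject
    @ServiceName("mysql")
    @Protocol("jdbc:mysql")
    private String serviceUrl;
}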

Undoubtedly, this is much cleaner. Still, it doesn't include information about the actual database or any parameters that are usually passed as part of the JDBC URL, so there is room for improvement here.

One would expect that in the same spirit a @Path or a @Parameter annotation would be available, but both of these belong to configuration data and are not a good fit for hardcoding into code. Moreover, the CDI extension of Fabric8 doesn't aspire to become a URL transformation framework. So instead it takes things up a notch, by allowing you to directly instantiate the client for accessing any given service and inject it into your code.

Creating clients for Services using the @Factory annotation

In the previous example we saw how we could obtain the URL for a service and create a JDBC connection with it. Any project that wants a JDBC connection can copy that snippet and it will work great, as long as the user remembers to set the actual database name.

Wouldn't it be great if, instead of copying and pasting that snippet, one could componentise it and reuse it? Here's where the @Factory annotation kicks in. You can annotate with @Factory any method that accepts a service URL as an argument and returns an object created using that URL (e.g. a client to a service). So for the previous example we could have a MysqlConnectionFactory:
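A sketch of such a factory; the signature here is an assumption based on the description, and toJdbcUrl is the illustrative helper mentioned earlier:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import io.fabric8.annotations.Factory;
import io.fabric8.annotations.ServiceName;

public class MysqlConnectionFactory {

    // Fabric8 invokes this with the resolved service URL whenever a
    // Connection qualified with @ServiceName needs to be produced
    @Factory
    @ServiceName
    public Connection create(@ServiceName String url) throws SQLException {
        return DriverManager.getConnection(toJdbcUrl(url));
    }

    private String toJdbcUrl(String url) {
        // illustrative conversion: tcp://host:port -> jdbc:mysql://host:port
        return url.replaceFirst("^tcp", "jdbc:mysql");
    }
}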
Then instead of injecting the URL one could directly inject the connection, as shown below.
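Something along these lines (again illustrative):

import java.sql.Connection;
import javax.inject.Inject;
import io.fabric8.annotations.ServiceName;

public class DatabaseConsumer {

    // Satisfied by a producer that delegates to MysqlConnectionFactory
    @Inject
    @ServiceName("mysql")
    private Connection connection;
}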
What happens here?

When the CDI application starts, the Fabric8 extension will receive events about all annotated methods. It will track all available factories, so for any non-String injection point annotated with @ServiceName, it will create a Producer that under the hood uses the matching @Factory.

In the example above, the MysqlConnectionFactory will get registered first, and when a Connection injection point with the @ServiceName qualifier gets detected, a Producer that delegates to the MysqlConnectionFactory will be created (all qualifiers will be respected).

This is awesome, but it is also simplistic. Why?
Because such a factory rarely requires only a URL to the service. In most cases other configuration parameters are required, like:

  • Authentication information
  • Connection timeouts
  • and more...

Using @Factory with @Configuration

In the next section we are going to see factories that use configuration data. I am going to use the mysql JDBC example and add support for specifying configurable credentials. But before that, I am going to ask a rhetorical question:

"How, can you configure a containerised application?" 

The shortest possible answer is "Using Environment Variables".

So in this example I'll assume that the credentials are passed to the container that needs to access mysql using a pair of environment variables, say MYSQL_USERNAME and MYSQL_PASSWORD (illustrative names).
Now we need to see how our @Factory can use those.

If you've wanted to use environment variables inside CDI in the past, chances are that you've used Apache DeltaSpike. Among other things, this project provides the @ConfigProperty annotation, which allows you to inject an environment variable into a CDI bean (it actually does more than that).

This bean could be combined with the @Factory method, so that we can pass configuration to the factory itself.
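A sketch of such a configuration bean, using DeltaSpike's @ConfigProperty and the illustrative variable names from above:

import javax.inject.Inject;
import org.apache.deltaspike.core.api.config.ConfigProperty;

public class MysqlConfiguration {

    @Inject
    @ConfigProperty(name = "MYSQL_USERNAME")
    private String username;

    @Inject
    @ConfigProperty(name = "MYSQL_PASSWORD")
    private String password;

    public String getUsername() { return username; }
    public String getPassword() { return password; }
}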

But what if we had multiple database servers, configured with a different set of credentials, or multiple databases? In this case we could use the service name as a prefix, and let Fabric8 figure out which environment variables it should look up for each @Configuration instance.
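The factory could then take the configuration as a second argument. A sketch, assuming Fabric8's @Configuration qualifier works as just described (names remain illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import io.fabric8.annotations.Configuration;
import io.fabric8.annotations.Factory;
import io.fabric8.annotations.ServiceName;

public class ConfigurableMysqlConnectionFactory {

    @Factory
    @ServiceName
    public Connection create(@ServiceName String url,
                             @Configuration MysqlConfiguration config)
            throws SQLException {
        // With the service name used as a prefix, Fabric8 resolves
        // MYSQL_USERNAME / MYSQL_PASSWORD for this instance
        String jdbcUrl = url.replaceFirst("^tcp", "jdbc:mysql");
        return DriverManager.getConnection(jdbcUrl,
                config.getUsername(), config.getPassword());
    }
}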

Now we have a reusable component that can be used with any mysql database running inside Kubernetes, and it is fully configurable.

There are additional features in the Fabric8 CDI extension, but since this post is already too long, they will be covered in future posts.

Stay tuned.

Categories: FLOSS Project Planets

Acquia: Build Your Drupal 8 Team: The Forrester Digital Maturity Model

Planet Drupal - Mon, 2015-06-15 10:46

In business, technology is a means to an end, and using it effectively to achieve that end requires planning and strategy.

The Capability Maturity Model, designed for assessing the formality of a software development process, was initially described back in 1989. The Forrester Digital Maturity Model is one of several models that update the CMM for modern software development in the age of e-commerce and mobile development, when digital capability isn't an add-on but rather is fundamental to business success. The model emphasizes communicating strategy while putting management and control processes into place.

Organizations that are further along in the maturity model are more likely to complete their projects successfully, time after time.

Let's take a look at the stages of this model, as the final post in our Build Your Drupal 8 Team series.

Here are the four stages:

Stage 1 is ad hoc development. When companies begin e-commerce development, there is no defined strategy, and the companies' products are not integrated with other systems. Most products are released in isolation and managed independently.

Stage 2 organizations follow a defined process model. The company is still reactive and managing projects individually, but the desired digital strategy has been identified.

Stage 3 is when the digital strategy and implementation is managed. An overall environment supportive for web and e-commerce development exists, and products are created within the context of that environment.

In Stage 4, the digital business needs are integrated. Products aren't defined in isolation, but rather are part of an overall strategic approach to online business. The company has a process for planning and developing the products and is focused on both deployment and ongoing support.

The final capability level, Stage 5, is when digital development is optimized. Cross-channel products are developed and do more than integrate: they are optimized for performance. The company is able to focus on optimizing the development team as well, with continuous improvement and agile development providing a competitive advantage.

Understanding where your company currently finds itself on the maturity scale can help you plan how you will integrate and adapt the new functionality of Drupal 8 into your development organization.

If you are an ad hoc development shop, adopting Drupal 8 and achieving its benefits may be very challenging for you. You may need to work with your team to move up at least one maturity level before you try to bring in the new technology.

In contrast, if your team is at stage 5, you can work on understanding how Drupal 8 will benefit not just your specific upcoming project, but also everything else that is going on within your organization.


  • A comprehensive SlideShare presentation on Digital Maturity Models.
  • A blog post by Forrester's Martin Gill that mentions the Digital Maturity Model in the context of digital acceleration.
Tags:  acquia drupal planet
Categories: FLOSS Project Planets

Steve Loughran: Why is so much of my life wasted waiting for test runs to complete?

Planet Apache - Mon, 2015-06-15 09:46
I've spent the weekend enduring the pain of kerberos-related functional test failures: test runs that take time to finish, especially as it's token expiry between deployed services which is the Source of Madness (copyright (c) 1988 MIT).

Anyone who follows me on Strava can infer when those runs take place: if it's a long one, I've nipped down to the road bike on the turbo trainer and done a bit of exercise while waiting for the results.

Which is all well and good except for one point: why do I have to wait so long?

While a test is running, the different services in the cluster are all generating timestamped events, "log messages" as they are usually known. The test runner itself is also generating a stream of events, from any client-side code and the wrapping JUnit/xUnit runners; again, tuples of (timestamp, thread, module, level, text) plus, implicitly, (process, host). And of course there's the actual outcome of each test.

Why do I have to wait until the entire test run is completed for those results to appear?

There's no fundamental reason for that to be the case. It's just the way that the functional tests have evolved under the unit test runners: test runners designed to run short-lived unit tests of little classes, runs where stdout and stderr were captured without any expectation of structured format. When <junit> completed individual test cases, it'd save the XML DOM built in memory to an XML file under build/tests. After JUnit itself completed, the build.xml would have a <junitreport> task to map XML -> HTML in a wondrous piece of XSLT.

Maven surefire does exactly the same thing, except its build reporter doesn't make it easy to stream the results to both XML files and the console at the same time.

The CI tooling (Cruise Control and its successors, of which Jenkins is the near-universal standard) took those same XML reports and now generates its own statistics, again waiting for the reports to be produced at the end of the test run.

That means those of us who are waiting for a test to finish have a limited set of choices
  1. Tweak the logging and output to the console, stare at it waiting to see stack traces to go by
  2. Run a single failing test repeatedly until you fix it, again staring at the output. In doing so you neglect the rest of the code, until at the end of the day you are left with the choice of (a) running the hour-long test of everything to make sure there are no regressions, or (b) committing and pushing and expecting a remote Jenkins to find the problem, at which point you may have broken a test and either need to get those results again and fix them, or rely on the goodwill of a colleague (special callout: Ted Yu, the person who usually ends up fixing SLIDER-1 issues)
Usually I drift into the single-test mode, but first you need to identify the problem. And even then, if the test takes a few minutes, each iteration hurts. And there's the hassle of collecting the logs, correlating events across machines and services to try and understand what's going on. If you want more detail, it's over to http:{service host:port}/logLevel to tune up the logs to capture more events on the next iteration, and so you are off again.

A hard-to-identify problem becomes a "very slow to identify problem", or productivity killer.

Sitting waiting for tests is a waste of time for software engineers.

What to do?

There's parallelisation. Apparently there's some parallelised test runner that the Cloudera team has which we could perhaps pick up and make reusable. That would be great, but you are still waiting for the end of the test runs for the results, unless you are going to ssh into the hosts and play tail -f against log files, or grep for specific event texts.

What would be just as critical is: real time reporting of test results.

I've discussed before how we need to radically improve tests and test runners.

What we also have to recognise is that the test reporting infrastructure equally dates from the era of unit tests taking milliseconds, full test suites and XSL transformations of the results taking 10s of seconds at most.

The world has moved on from unit tests.

What do I want now? As well as the streaming of those events in structured form directly to some aggregator, I want that test runner to be immediately publishing the aggregate event stream and test results to some viewer better than four consoles with tail -f streaming text files (or worse, XML reports). I want HTML pages as they come in, with my test report initially showing all tests enumerated, then filling up as tests run and fail. I'd like the runner to know (history, user input?) which tests were failing, and so run them first. If I checked in a patch to a specific test class, that'll be the one I want to run next, followed by everything else in the same module (assuming locality of test coverage).

Once I've got this, the CI tooling I'm going to run will change. It won't be a central machine or pool of them, it'll be a VM hosted locally or some cloud infrastructure. Probably the latter, so it won't be fighting for RAM and CPU time with the IDE.

Whenever I commit and push a patch to my private branch, the tests should run.

It's my own personal CI instance, it gets to run my tests, and I get to have a browser window open keeping track of the status while I get on with doing other things.

We can do this: it's just the test runner reporting being switched from batch to streaming, with the HTML views adapting.

If we're building the largest distributed computing systems on the planet, we can't say that this is beyond us.

(Photo: Nibali descending from the Chartreuse Massif into Grenoble; Richie Porte and others just behind him doomed to failure on the climb to Chamrousse, TdF 2014 stage 13)
Categories: FLOSS Project Planets

Deutsche OpenStack Tage 2015

Planet KDE - Mon, 2015-06-15 09:13

Next week, from 23-24 June 2015, the "German OpenStack Days" (Deutsche OpenStack Tage) take place in Frankfurt. I will give a talk about "Ceph in a security critical OpenStack Cloud" on Wednesday.
The presentations are conducted in German, so the conference is mainly of interest to German speakers. There are several OpenStack and also Ceph related talks on the schedule, including a workshop on Ceph. As far as I know there are still tickets available for the conference.

Categories: FLOSS Project Planets

Reuven Lerner: Free Webinar on June 23rd: Introduction to Regular Expressions

Planet Python - Mon, 2015-06-15 08:45

If you’re a programmer, then you have likely heard about regular expressions (“regexps”) before. However, it’s also likely that you have tried to learn them, and have found them to be completely confusing. That’s not unusual; while regular expressions provide us with a powerful tool for analyzing text, their terse, dense, and cryptic syntax can make the effort not seem worthwhile.

On June 23rd, I’m going to be offering a one-hour free Webinar introducing regular expressions, showing how they can make your code more powerful and expressive.

While I’ll mostly be using Python, I’ll also show some other languages and platforms (e.g., Ruby, JavaScript, and the Unix “grep” command).

My demo and discussion will be about an hour long, and will be followed by ample time for Q&A.  My previous Webinars have been lots of fun; I hope that you’ll join in!  You can get (free) tickets at EventBrite.

And hey, if you’re an independent consultant, you can get a double dose of me on that same day; we Freelancers Show panelists will be doing our monthly Q&A just beforehand.  Come and get your questions about consulting answered by our panel of experts!

I look forward to seeing you at one or both of these events!  If you have any questions, you can e-mail me or contact me on Twitter as @reuvenmlerner.

The post Free Webinar on June 23rd: Introduction to Regular Expressions appeared first on Lerner Consulting Blog.

Categories: FLOSS Project Planets

Sebastien Goasguen: Introducing Kmachine, a Docker machine fork for Kubernetes.

Planet Apache - Mon, 2015-06-15 08:44

Docker machine is a great tool to easily start a Docker host on most public Cloud providers out there. Very handy as a replacement to Vagrant if all you want is a Docker host in the Cloud.

It automatically installs the Docker daemon and sets up the TLS authentication so that you can communicate with it using your local Docker client. It also has some early features to start a Swarm cluster (i.e. multiple Docker hosts).

Since I have been playing with Kubernetes lately, and there is a single-node install available, all based on Docker images...

I thought: let's hack Docker machine a bit, so that in addition to installing the Docker daemon it also pulls a few images and starts a few containers on boot.

The result is kmachine. The usage is exactly the same as Docker machine. It goes something like this on exoscale (I did not take the time to open 6443 on all providers... PR welcome :)):

$ kmachine create -d exoscale foobar
$ kmachine env foobar
kubectl config set-cluster kmachine --server= --insecure-skip-tls-verify=true
kubectl config set-credentials kuser --token=abcdefghijkl
kubectl config set-context kmachine --user=kuser --cluster=kmachine
kubectl config use-context kmachine
export DOCKER_HOST="tcp://"
export DOCKER_CERT_PATH="/Users/sebastiengoasguen/.docker/machine/machines/foobar5"
export DOCKER_MACHINE_NAME="foobar5"
# Run this command to configure your shell:
# eval "$(kmachine_darwin-amd64 env foobar5)"

You can see that I used kubectl, the Kubernetes client, to automatically set up the endpoint created by machine. The only gotcha right now is that I hard-coded the token... easily fixed by a friendly PR. We could also set up proper certificates and TLS authentication, but I opted for the easy route for now. If you set up your env, you will have access to Kubernetes, and Docker of course; the original docker-machine functionality is not broken.

$ eval "$(kmachine env foobar)"
$ kubectl get pods
kubernetes- controller-manager gcr.io/google_containers/hyperkube:v0.17.0
apiserver gcr.io/google_containers/hyperkube:v0.17.0
scheduler gcr.io/google_containers/hyperkube:v0.17.0
$ kubectl get nodes
NAME        LABELS   STATUS
Schedulable <none>   Ready

Since all Kubernetes components are started as containers, you will see all of them running from the start: etcd, the kubelet, the controller, the proxy, etc.

$ docker ps
7e5d356d31d7 gcr.io/google_containers/hyperkube:v0.17.0 "/hyperkube controll
9cc05adf2b27 gcr.io/google_containers/hyperkube:v0.17.0 "/hyperkube schedule
7a0e490a44e1 gcr.io/google_containers/hyperkube:v0.17.0 "/hyperkube apiserve
6d2d743172c6 gcr.io/google_containers/pause:0.8.0 "/pause"
7950a0d14608 gcr.io/google_containers/hyperkube:v0.17.0 "/hyperkube proxy --
55fc22c508a9 gcr.io/google_containers/hyperkube:v0.17.0 "/hyperkube kubelet
c67496a47bf3 kubernetes/etcd: "/usr/local/bin/etcd

Have fun! I think it is very handy to get started with Kubernetes and still have the Docker machine setup working. You get the benefit of both: easy provisioning of a Docker host in the Cloud, and a fully working Kubernetes setup to experiment with. If we could couple it with Weave or Flannel, we could set up a full Kubernetes cluster in the Cloud, just like Swarm.

Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Tarek Ziade

Planet Python - Mon, 2015-06-15 08:30

This week we welcome Tarek Ziadé (@tarek_ziade) as our PyDev of the Week! He is a contributor to a lot of open source projects, which you can see if you visit his Github profile. I have personally enjoyed some of his blog articles on Python. Let’s spend some time getting to know Tarek!

Can you tell us a little about yourself (hobbies, education, etc):

I am a French programmer, living in Burgundy. I went to a French Institute of Technology (IUT) in Computer Software and started to work shortly after in the late 90s. I wrote several books about Python – the initial goal for me was to deeply understand Python and its whole ecosystem and help others in that process. I am also a terrible Trumpet player and an obsessed runner.

I am going to run the Berlin Marathon next September, and I am trying to collect money for Doctors Without Borders for the event. This is not a way for me to get a charity bib, as I have already secured my bib, but an opportunity to raise funds for that important organization.

To try to give people more incentive to donate, I am doing charity code reviews: you give to my fundraising in exchange for some of my time reviewing your code. Details: http://ziade.org/2015/01/27/charity-python-code-review/

Why did you start using Python?

In one of the first companies where I worked, we used Borland Delphi, and I discovered Open Source through some VCL components. I was amazed by the idea of building stuff in the open and interacting with a community. Then I found out about Zope & the CMF, and ended up working in that field for Nuxeo, the creators of "CPS", a competitor of Plone. Since then I have never really left the Python community, even if I am not involved like I used to be, after being burnt out by the whole packaging effort.

What other programming languages do you know and which is your favorite?

I mainly use Python these days – but did some work with a bunch of other languages that would be pointless to enumerate here. Right now I am poking a bit at Lua in an Nginx context and I really enjoy building things with this language.

Frankly, that combination is amazing for building any web service that does not require much besides calling other servers like Redis or MySQL and returning JSON.

I am also looking forward to start to use Rust. I poked at Go a little bit but I did not really get the buzz many folks in the Python community seem to be getting on it.

What projects are you working on now?

I work at Mozilla where I lead a small team that focuses on “Cloud Storage” – It’s a vague, buzz-compliant term, but basically, we’re trying to converge all storage needs we have across our organization in a small & standardized set of APIs. I have the chance to work with a team of world-class engineers and I feel stupid all the time, which is good.

We wrote and maintain things like the Firefox Sync server, the Hello server, and many more services and tools.

Which Python libraries are your favorite (core or 3rd party)?

I am pretty excited about tulip/asyncio, even if we did not release anything with it yet. When that project started I was a huge gevent/greenlet fan and I did not understand why they were building a twisted-like thing. I was pretty sure Guido was wrong on that one. Then, after a lot of work in some async apps, including big node.js projects, I slowly realized that Guido nailed it again

But hey, I am not Dutch so it’s okay.

Is there anything else you’d like to say?

I love the Python community but I miss the days where we had smaller Pycons – this conference got too big for me. What’s really nice though is that there are more and more small and mid-size Python events all around the globe.

Categories: FLOSS Project Planets

Petter Reinholdtsen: Graphing the Norwegian company ownership structure

Planet Debian - Mon, 2015-06-15 08:00

It is a bit of work to figure out the ownership structure of companies in Norway. The information is publicly available, but one needs to recursively look up ownership for all owners to figure out the complete ownership graph of a given set of companies. To save myself the work in the future, I wrote a script to do this automatically, outputting the ownership structure in the Graphviz/dotty format. The data source is web scraping from Proff, because I failed to find a useful source directly from the official keeper of the ownership data, Brønnøysundsregistrene.

To get an ownership graph for a set of companies, fetch the code from git and run it using the organisation number. I'm using the Norwegian newspaper Dagbladet as an example here, as its ownership structure is very simple:

% time ./bin/eierskap-dotty 958033540 > dagbladet.dot
real    0m2.841s
user    0m0.184s
sys     0m0.036s
%

The script accepts several organisation numbers on the command line, allowing a cluster of companies to be graphed in the same image. The resulting dot file for the example above looks like this. The edges are labeled with the ownership percentage, and the nodes use the organisation number as their name and the company name as the label:

digraph ownership {
  rankdir = LR;
  "Aller Holding A/s" -> "910119877" [label="100%"]
  "910119877" -> "998689015" [label="100%"]
  "998689015" -> "958033540" [label="99%"]
  "974530600" -> "958033540" [label="1%"]
  "958033540" [label="AS DAGBLADET"]
  "998689015" [label="Berner Media Holding AS"]
  "974530600" [label="Dagbladets Stiftelse"]
  "910119877" [label="Aller Media AS"]
}

To view the ownership graph, run "dotty dagbladet.dot" or convert it to a PNG using "dot -T png dagbladet.dot > dagbladet.png". The result can be seen below:

Note that I suspect the "Aller Holding A/S" entry to be incorrect data in the official ownership register, as that name is not registered in the official company register for Norway. The ownership register is sensitive to typos, and there seems to be no strict checking of the ownership links.

Let me know if you improve the script or find better data sources. The code is licensed according to GPL 2 or newer.

Update 2015-06-15: Since the initial post I've been told that "Aller Holding A/S" is a Danish company, which explain why it did not have a Norwegian organisation number. I've also been told that there is a web services API available from Brønnøysundsregistrene, for those willing to accept the terms or pay the price.

Categories: FLOSS Project Planets

Annertech: Web Development on Fire? Smoke testing a Drupal Website

Planet Drupal - Mon, 2015-06-15 06:57
Web Development on Fire? Smoke testing a Drupal Website

Documenting code 10 years ago was always something that I wanted to do, but, let's face it: clients didn't give a damn, so unless you did it for free, it rarely happened. And I felt very sorry for the developer that had to fix any bugs without documentation (yes, even my code contains bugs from time to time!).

Categories: FLOSS Project Planets

Ruslan Spivak: Let’s Build A Simple Interpreter. Part 1.

Planet Python - Mon, 2015-06-15 06:00

“If you don’t know how compilers work, then you don’t know how computers work. If you’re not 100% sure whether you know how compilers work, then you don’t know how they work.” — Steve Yegge

There you have it. Think about it. It doesn’t really matter whether you’re a newbie or a seasoned software developer: if you don’t know how compilers and interpreters work, then you don’t know how computers work. It’s that simple.

So, do you know how compilers and interpreters work? And I mean, are you 100% sure that you know how they work? If you don’t.

Or if you don’t and you’re really agitated about it.

Do not worry. If you stick around and work through the series and build an interpreter and a compiler with me you will know how they work in the end. And you will become a confident happy camper too. At least I hope so.

Why would you study interpreters and compilers? I will give you three reasons.

  1. To write an interpreter or a compiler you have to have a lot of technical skills that you need to use together. Writing an interpreter or a compiler will help you improve those skills and become a better software developer. As well, the skills you will learn are useful in writing any software, not just interpreters or compilers.
  2. You really want to know how computers work. Often interpreters and compilers look like magic. And you shouldn’t be comfortable with that magic. You want to demystify the process of building an interpreter and a compiler, understand how they work, and get in control of things.
  3. You want to create your own programming language or domain specific language. If you create one, you will also need to create either an interpreter or a compiler for it. Recently, there has been a resurgence of interest in new programming languages. And you can see a new programming language pop up almost every day: Elixir, Go, Rust just to name a few.

Okay, but what are interpreters and compilers?

The goal of an interpreter or a compiler is to translate a source program in some high-level language into some other form. Pretty vague, isn’t it? Just bear with me, later in the series you will learn exactly what the source program is translated into.

At this point you may also wonder what the difference is between an interpreter and a compiler. For the purpose of this series, let’s agree that if a translator translates a source program into machine language, it is a compiler. If a translator processes and executes the source program without translating it into machine language first, it is an interpreter. Visually it looks something like this:

I hope that by now you’re convinced that you really want to study and build an interpreter and a compiler. What can you expect from this series on interpreters?

Here is the deal. You and I are going to create a simple interpreter for a large subset of Pascal language. At the end of this series you will have a working Pascal interpreter and a source-level debugger like Python’s pdb.

You might ask, why Pascal? For one thing, it’s not a made-up language that I came up with just for this series: it’s a real programming language that has many important language constructs. And some old, but useful, CS books use Pascal programming language in their examples (I understand that that’s not a particularly compelling reason to choose a language to build an interpreter for, but I thought it would be nice for a change to learn a non-mainstream language :)

Here is an example of a factorial function in Pascal that you will be able to interpret with your own interpreter and debug with the interactive source-level debugger that you will create along the way:

program factorial;

function factorial(n: integer): longint;
begin
    if n = 0 then
        factorial := 1
    else
        factorial := n * factorial(n - 1);
end;

var
    n: integer;

begin
    for n := 0 to 16 do
        writeln(n, '! = ', factorial(n));
end.

The implementation language of the Pascal interpreter will be Python, but you can use any language you want because the ideas presented don’t depend on any particular implementation language. Okay, let’s get down to business. Ready, set, go!

You will start your first foray into interpreters and compilers by writing a simple interpreter of arithmetic expressions, also known as a calculator. Today the goal is pretty minimalistic: to make your calculator handle the addition of two single digit integers like 3+5. Here is the source code for your calculator, sorry, interpreter:

# Token types
#
# EOF (end-of-file) token is used to indicate that
# there is no more input left for lexical analysis
INTEGER, PLUS, EOF = 'INTEGER', 'PLUS', 'EOF'


class Token(object):
    def __init__(self, type, value):
        # token type: INTEGER, PLUS, or EOF
        self.type = type
        # token value: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, '+', or None
        self.value = value

    def __str__(self):
        """String representation of the class instance.

        Examples:
            Token(INTEGER, 3)
            Token(PLUS '+')
        """
        return 'Token({type}, {value})'.format(
            type=self.type,
            value=repr(self.value)
        )

    def __repr__(self):
        return self.__str__()


class Interpreter(object):
    def __init__(self, text):
        # client string input, e.g. "3+5"
        self.text = text
        # self.pos is an index into self.text
        self.pos = 0
        # current token instance
        self.current_token = None

    def error(self):
        raise Exception('Error parsing input')

    def get_next_token(self):
        """Lexical analyzer (also known as scanner or tokenizer)

        This method is responsible for breaking a sentence
        apart into tokens. One token at a time.
        """
        text = self.text

        # is self.pos index past the end of the self.text ?
        # if so, then return EOF token because there is no more
        # input left to convert into tokens
        if self.pos > len(text) - 1:
            return Token(EOF, None)

        # get a character at the position self.pos and decide
        # what token to create based on the single character
        current_char = text[self.pos]

        # if the character is a digit then convert it to
        # integer, create an INTEGER token, increment self.pos
        # index to point to the next character after the digit,
        # and return the INTEGER token
        if current_char.isdigit():
            token = Token(INTEGER, int(current_char))
            self.pos += 1
            return token

        if current_char == '+':
            token = Token(PLUS, current_char)
            self.pos += 1
            return token

        self.error()

    def eat(self, token_type):
        # compare the current token type with the passed token
        # type and if they match then "eat" the current token
        # and assign the next token to the self.current_token,
        # otherwise raise an exception.
        if self.current_token.type == token_type:
            self.current_token = self.get_next_token()
        else:
            self.error()

    def expr(self):
        """expr -> INTEGER PLUS INTEGER"""
        # set current token to the first token taken from the input
        self.current_token = self.get_next_token()

        # we expect the current token to be a single-digit integer
        left = self.current_token
        self.eat(INTEGER)

        # we expect the current token to be a '+' token
        op = self.current_token
        self.eat(PLUS)

        # we expect the current token to be a single-digit integer
        right = self.current_token
        self.eat(INTEGER)
        # after the above call the self.current_token is set to
        # EOF token

        # at this point INTEGER PLUS INTEGER sequence of tokens
        # has been successfully found and the method can just
        # return the result of adding two integers, thus
        # effectively interpreting client input
        result = left.value + right.value
        return result


def main():
    while True:
        try:
            # To run under Python3 replace 'raw_input' call
            # with 'input'
            text = raw_input('calc> ')
        except EOFError:
            break
        if not text:
            continue
        interpreter = Interpreter(text)
        result = interpreter.expr()
        print(result)


if __name__ == '__main__':
    main()

Save the above code into a file called calc1.py, or download it directly from GitHub. Before you start digging deeper into the code, run the calculator on the command line and see it in action. Play with it! Here is a sample session on my laptop (if you want to run the calculator under Python 3 you will need to replace raw_input with input):

$ python calc1.py
calc> 3+4
7
calc> 3+5
8
calc> 3+9
12
calc>

For your simple calculator to work properly without throwing an exception, your input needs to follow certain rules:

  • Only single digit integers are allowed in the input
  • The only arithmetic operation supported at the moment is addition
  • No whitespace characters are allowed anywhere in the input

Those restrictions are necessary to make the calculator simple. Don’t worry, you’ll make it pretty complex pretty soon.

Okay, now let’s dive in and see how your interpreter works and how it evaluates arithmetic expressions.

When you enter an expression 3+5 on the command line your interpreter gets a string “3+5”. In order for the interpreter to actually understand what to do with that string it first needs to break the input “3+5” into components called tokens. A token is an object that has a type and a value. For example, for the string “3” the type of the token will be INTEGER and the corresponding value will be integer 3.

The process of breaking the input string into tokens is called lexical analysis. So, the first step your interpreter needs to do is read the input of characters and convert it into a stream of tokens. The part of the interpreter that does it is called a lexical analyzer, or lexer for short. You might also encounter other names for the same component, like scanner or tokenizer. They all mean the same: the part of your interpreter or compiler that turns the input of characters into a stream of tokens.

The method get_next_token of the Interpreter class is your lexical analyzer. Every time you call it, you get the next token created from the input of characters passed to the interpreter. Let’s take a closer look at the method itself and see how it actually does its job of converting characters into tokens. The input is stored in the variable text that holds the input string and pos is an index into that string (think of the string as an array of characters). pos is initially set to 0 and points to the character ‘3’. The method first checks whether the character is a digit and if so, it increments pos and returns a token instance with the type INTEGER and the value set to the integer value of the string ‘3’, which is an integer 3:

The pos now points to the ‘+’ character in the text. The next time you call the method, it tests if a character at the position pos is a digit and then it tests if the character is a plus sign, which it is. As a result the method increments pos and returns a newly created token with the type PLUS and value ‘+’:

The pos now points to character ‘5’. When you call the get_next_token method again the method checks if it’s a digit, which it is, so it increments pos and returns a new INTEGER token with the value of the token set to integer 5:

Because the pos index is now past the end of the string “3+5” the get_next_token method returns the EOF token every time you call it:

Try it out and see for yourself how the lexer component of your calculator works:

>>> from calc1 import Interpreter
>>>
>>> interpreter = Interpreter('3+5')
>>> interpreter.get_next_token()
Token(INTEGER, 3)
>>>
>>> interpreter.get_next_token()
Token(PLUS, '+')
>>>
>>> interpreter.get_next_token()
Token(INTEGER, 5)
>>>
>>> interpreter.get_next_token()
Token(EOF, None)
>>>

So now that your interpreter has access to the stream of tokens made from the input characters, the interpreter needs to do something with it: it needs to find the structure in the flat stream of tokens it gets from the lexer get_next_token. Your interpreter expects to find the following structure in that stream: INTEGER -> PLUS -> INTEGER. That is, it tries to find a sequence of tokens: integer followed by a plus sign followed by an integer.

The method responsible for finding and interpreting that structure is expr. This method verifies that the sequence of tokens does indeed correspond to the expected sequence of tokens, i.e. INTEGER -> PLUS -> INTEGER. After it has successfully confirmed the structure, it generates the result by adding the value of the token on the left side of the PLUS and the one on the right side of the PLUS, thus successfully interpreting the arithmetic expression you passed to the interpreter.

The expr method itself uses the helper method eat to verify that the token type passed to the eat method matches the current token type. After matching the passed token type the eat method gets the next token and assigns it to the current_token variable, thus effectively “eating” the currently matched token and advancing the imaginary pointer in the stream of tokens. If the structure in the stream of tokens doesn’t correspond to the expected INTEGER PLUS INTEGER sequence of tokens the eat method throws an exception.

Let’s recap what your interpreter does to evaluate an arithmetic expression:

  • The interpreter accepts an input string, let’s say “3+5”
  • The interpreter calls the expr method to find a structure in the stream of tokens returned by the lexical analyzer get_next_token. The structure it tries to find is of the form INTEGER PLUS INTEGER. After it’s confirmed the structure, it interprets the input by adding the values of two INTEGER tokens because it’s clear to the interpreter at that point that what it needs to do is add two integers, 3 and 5.

Congratulate yourself. You’ve just learned how to build your very first interpreter!

Now it’s time for exercises.

You didn’t think you would just read this article and that would be enough, did you? Okay, get your hands dirty and do the following exercises:

  1. Modify the code to allow multiple-digit integers in the input, for example “12+3”
  2. Add a method that skips whitespace characters so that your calculator can handle inputs with whitespace characters like ” 12 + 3”
  3. Modify the code and instead of ‘+’ handle ‘-‘ to evaluate subtractions like “7-5”

Check your understanding

  1. What is an interpreter?
  2. What is a compiler?
  3. What’s the difference between an interpreter and a compiler?
  4. What is a token?
  5. What is the name of the process that breaks input apart into tokens?
  6. What is the part of the interpreter that does lexical analysis called?
  7. What are the other common names for that part of an interpreter or a compiler?

Before I finish this article, I really want you to commit to studying interpreters and compilers. And I want you to do it right now. Don’t put it on the back burner. Don’t wait. If you’ve skimmed the article, start over. If you’ve read it carefully but haven’t done exercises - do them now. If you’ve done only some of them, finish the rest. You get the idea. And you know what? Sign the commitment pledge to start learning about interpreters and compilers today!

I, ________, of being sound mind and body, do hereby pledge to commit to studying interpreters and compilers starting today and get to a point where I know 100% how they work!



Sign it, date it, and put it somewhere where you can see it every day to make sure that you stick to your commitment. And keep in mind the definition of commitment:

“Commitment is doing the thing you said you were going to do long after the mood you said it in has left you.” — Darren Hardy

Okay, that’s it for today. In the next article of the mini series you will extend your calculator to handle more arithmetic expressions. Stay tuned.

If you can’t wait for the second article and are chomping at the bit to start digging deeper into interpreters and compilers, here is a list of books I recommend that will help you along the way:

  1. Language Implementation Patterns: Create Your Own Domain-Specific and General Programming Languages (Pragmatic Programmers)

  2. Writing Compilers and Interpreters: A Software Engineering Approach

  3. Modern Compiler Implementation in Java

  4. Modern Compiler Design

  5. Compilers: Principles, Techniques, and Tools (2nd Edition)

BTW, I’m writing a book “Let’s Build A Web Server: First Steps” that explains how to write a basic web server from scratch. You can get a feel for the book here, here, and here. Subscribe to the mailing list to get the latest updates about the book and the release date.


Categories: FLOSS Project Planets

Drupal core announcements: Recording from June 12th 2015 Drupal 8 critical issues discussion

Planet Drupal - Mon, 2015-06-15 05:56

It came up multiple times at recent events that it would be very helpful for people working significantly on Drupal 8 critical issues to get together more often to talk about the issues and unblock each other where discussion is needed. While these meetings do not by any means replace the issue queue discussions (much as in-person meetings at events do not), they do help to unblock things much more quickly. We also don't believe that the number of people working on critical issues, or who those people are, should be limited, so we did not want to keep the discussions closed. After our second meeting last week, here is the recording of the third meeting, from today, in the hope that it helps more than just those who were on the call:

Unfortunately not all people invited made it this time. If you also have significant time to work on critical issues in Drupal 8 and we did not include you, let me know as soon as possible.

The issues mentioned were as follows:

Alex Pott
Rebuilding service container results in endless stampede: https://www.drupal.org/node/2497243
Twig placeholder filter should not map to raw filter: https://www.drupal.org/node/2495179

Francesco Placella
FieldItemInterface methods are only invoked for SQL storage and are inconsistent with hooks: https://www.drupal.org/node/2478459

Lee Rowlands
Make block context faster by removing onBlock event and replace it with loading from a BlockContextManager: https://www.drupal.org/node/2354889

Alex Pott
Rewrite \Drupal\file\Controller\FileWidgetAjaxController::upload() to not rely on form cache https://www.drupal.org/node/2500527

Gábor Hojtsy
Twig placeholder filter should not map to raw filter: https://www.drupal.org/node/2495179

Daniel Wehner
drupal_html_id() considered harmful; remove ajax_html_ids to use GET (not POST) AJAX requests: https://www.drupal.org/node/1305882

Francesco Placella
Node revisions cannot be reverted per translation: https://www.drupal.org/node/2453153

Daniel Wehner
SA-CORE-2014-002 forward port only checks internal cache: https://www.drupal.org/node/2421503

Francesco Placella
Nat: it would be good to have your feedback on the proposed solution to the translation revisions issue, aside from its criticality (see https://www.drupal.org/node/2453153#comment-9991563 and following)

Fabian Franz
[PP-2] Remove support for #ajax['url'] and $form_state->setCached() for GET requests: https://www.drupal.org/node/2502785
Condition plugins should provide cache contexts AND cacheability metadata needs to be exposed: https://www.drupal.org/node/2375695
Make block context faster by removing onBlock event and replace it with loading from a BlockContextManager: https://www.drupal.org/node/2354889

Alex Pott
[meta] Identify necessary performance optimizations for common profiling scenarios: http://drupal.org/node/2470679

Nathaniel Catchpole
Core profiling scenarios: https://www.drupal.org/node/2497185
Node::isPublished() and Node:getOwnerId() are expensive: https://www.drupal.org/node/2498919
And User:getAnonymousUser() takes 13ms due to ContentEntityBase::setDefaultLangcode() (https://www.drupal.org/node/2504849), which is a similar issue.

Categories: FLOSS Project Planets

Jim Birch: Using CKFinder to organize image uploads by Content type in Drupal 7

Planet Drupal - Mon, 2015-06-15 05:00

As you may have noticed, /sites/default/files can quickly become a pretty busy place in your Drupal installation.  When creating image or file fields, we can add folders in the Drupal UI to organize the uploads.  But when we allow users to upload using the CKEditor WYSIWYG Editor, we have to work a bit harder to organize those uploads.

I am currently working on a project where we want to organize the uploads by content type.  Certain users have access to certain content types.  We want to be able to keep the separation going with the files.  Our goal is to have the wysiwyg uploads in the same folder as the "featured image" field on each content type, which is in /sites/default/files/[content-type].

What I quickly learned, was that IMCE is great in so many ways, and part of our normal Drupal install, but there is no obvious way to do this.  You can use IMCE to organize in a variety of different ways, like php date based folders and user id folders.  You could even have a roles based system, by creating an IMCE profile per role.  But I couldn't figure out a way to organize by field, or Content Type.

CKFinder to the rescue.  CKFinder is a premium file manager plugin for CKEditor.  When integrated with the CKEditor Drupal Module, both can be customized right in the Drupal UI.

Read more

Categories: FLOSS Project Planets

Alessio Treglia: How to have a successful OpenStack project

Planet Debian - Mon, 2015-06-15 04:30

It’s no secret that OpenStack is becoming the de-facto standard for private cloud and a way for telecom operators to differentiate against big names such as Amazon or Google.
OpenStack has already been adopted in some specific projects, but wide adoption in enterprises is only starting now, mostly because people simply find it difficult to understand. VMware is still the obvious point of comparison, but OpenStack and cloud are different: while cloud implies virtualization, virtualization is not cloud.

Cloud is a huge shift in your organization and will change forever your way of working in the IT projects, improving your IT dramatically and cutting down costs.

In order to get the best of OpenStack, you need to understand deeply how cloud works. Moreover, you need to understand the whole picture beyond the software itself to provide new levels of agility, flexibility, and cost savings in your business.

Giuseppe Paterno’, leading European consultant and recently awarded by HP, wrote OpenStack Explained to guide you through the OpenStack technology and reveal his secret ingredient to have a successful project. You can download the ebook for a small donation to provide emergency and reconstruction aid for Nepal. Your donation is certified by ZEWO , the Swiss federal agency that ensures that funds go to a real charity project.

… but hurry up: the ebook is a limited edition, and the offer ends in July 2015.

Donate & Download here: https://life-changer.helvetas.ch/openstack

Categories: FLOSS Project Planets

PreviousNext: How to index panelizer node pages using Drupal Apache Solr module

Planet Drupal - Mon, 2015-06-15 03:44

Apache Solr Search is a great module for integrating your Drupal site with the powerful Apache Solr search tool. Out of the box it can index nodes and their fields, but Panelizer pages won't be indexed. In this post I show how you can get around this by indexing the rendered HTML of a panelizer node page.

Categories: FLOSS Project Planets

My Frustration with Mozilla

LinuxPlanet - Fri, 2015-06-12 11:51

I recently decided to stop using Firefox as my main browser. I’m not alone there. While browser statistics are notoriously difficult to track and hotly debated, all sources seem to point toward a downward trend for Firefox. At LQ, they actually aren’t doing too badly: in 2010 Firefox had roughly a 57% market share, and so far this year they’re at 37%. LQ is a highly technical site, however, and the broader numbers don’t look quite so good. Over a similar period, for example, Wikipedia has Firefox dropping from over 30% to just over 15%. At the current rate NetMarketShare is tracking, Firefox will be in the single digits some time this year. You get the idea. So what’s going on, and what does that mean for Mozilla? And why did I choose now to make a switch personally?

First, let me say it’s not all technical. While it’s troubling that they have not been able to track down some of the memory leaks and other issues for years, Firefox is an incredibly complex piece of software, and overall it runs fine for me. Australis didn’t bother me as much as it did many, nor did the Pocket integration. I understand that the decision to include EME was a pragmatic one; I think the recent additional add-on rules were as well. Despite these issues, I remained an ardent Firefox supporter who actively promoted its adoption. Taking a step back now, though, it is surprising to see just how many of the technical decisions they’re making are poorly received by the Firefox community. I think part of that is because Firefox started as the browser of the early adopter and power user, but as it gained popularity Mozilla felt pressure to make a more mainstream product, and recently that pressure has manifested itself in Firefox looking more like Chrome. I think they’ve lost their way a little bit technically and have forgotten what actually made them popular, but that was not enough for me to stop using Firefox.

On a recent Bad Voltage episode, we discussed some of these issues (and more), with the intention of having someone from Mozilla on the next show to give feedback on our thoughts. When we reached out, Mozilla not only declined to participate, they declined even to provide a statement (there is a fair bit more to the story, but it’s off the record and unfortunately I can’t provide further details at this time). This made me step back and reassess what I thought about Mozilla as a whole, something I hadn’t done in a while, to be honest. Mozilla used to be a place where you were encouraged to speak your mind. What happened?

For context, I held Mozilla in the highest regard. It’s not hyperbole to say that I genuinely believe the Open Web would not be where it is today without what Mozilla has accomplished. I consider their goals and the Mozilla Manifesto extremely important to the future of the web, and it would be a shame to see us lose the freedom and openness we’ve fought so hard to gain. But somewhere along the line, it appears to me, Mozilla either forgot who they were, or who they were changed. Mozilla’s mission is “to promote openness, innovation & opportunity on the Web”. Yet looking at their recent actions, and I’m not just referring to the Bad Voltage-related decision, they don’t appear willing to be open or transparent about themselves. Their responses to incidents like the Pocket one resemble those of a large, stodgy corporation, not the Open Source-spirited Mozilla I was accustomed to dealing with.

Maybe part of the issue is my perception. Many people, myself included, look at Mozilla as a bastion of freedom: the torchbearer for the free and Open Web. But the reality is that Mozilla is now a corporation, and one with over 1,000 employees. Emailing their PR department will get you a response from someone who used to work for CNN and the BBC. As companies grow, the culture often changes. The small, scrappy steward of the Open Web may not exist any more, at least not in the pure, concentrated form it used to; I know there is a solid core of it that very much burns within the larger organization. But this puts Mozilla in a really difficult position. They are not only losing market share rapidly, but losing it to a browser made by the company that used to represent the vast majority of their revenue. With both revenue and market share declining, does Mozilla still have the clout it needs to steer the evolution of the web in a direction that is open and transparent?

I am a firm believer that the web would be a worse place without Mozilla. One of my largest concerns is that many higher-level Mozillians don’t seem to think anything is wrong. Perhaps they are too close to the issue, or so focused on the cause that it’s difficult or impossible to step back and assess where the organization came from, where it is, and where it is going. Perhaps the organization is a little lost internally: struggling with the decreasing market share of its main project, less-than-stellar adoption on mobile, interesting projects such as Rust and Servo taking resources, and internal conflict about which direction is the best path forward. Whatever the case, it appears externally, based on the number of people leaving and the decreasing willingness to discuss anything, that something is systemically amiss with the culture.

Or perhaps I’m wrong here and everything really is fine. Perhaps this is simply the result of an organization that has seen tremendous growth, and this new, grown-up, more corporate Mozilla really is the best organization to move the Open Web forward. I’m interested in hearing what others think on this topic. Has Mozilla lost its way, and if so, how? More importantly, if so, how do we move forward and pragmatically address the issue(s)? I think Mozilla is too important to the future of the web not to at least ask these questions.

NOTE: We also discussed this topic on the most recent episode of Bad Voltage. You should listen to the entire episode, but I’ve included just the Mozilla segment here for your convenience.


PS: I have reached out to a few people at Mozilla to get their take on this. Ideally I’d like to have an interview with one or more of them up at LQ next week, but I don’t have any firm confirmations yet. If you work or worked at Mozilla and have something to add, feel free to post here or contact me directly so we can set something up. We need you Mozilla; let’s get this fixed.

Categories: FLOSS Project Planets

Bad Voltage Season 1 Episode 44 Has Been Released

LinuxPlanet - Fri, 2015-06-12 10:21

Jono Bacon, Bryan Lunduke (he returns!), Stuart Langridge and I present Bad Voltage, in which all books are signed from now on, we reveal that we are coming to Europe in September and that you can come to the live show, and:

  • 00:01:39 In the last show, Bad Voltage fixed Mozilla, or at least proposed what we think they might want to do to fix themselves. We asked Mozilla PR for comment or a statement, and they declined. This leads into a discussion about Mozilla’s internal culture, and how their relationships with the community have changed
  • 00:18:14 Stuart reviews Seveneves, the new book by Neal Stephenson
  • 00:29:28 Bad Voltage Fixes the F$*%ing World: we pick a technology or company or thing that we think isn’t doing what it should be, and discuss what it should be doing instead. We look at a company who have been in the news recently, but maybe wish they weren’t: SourceForge
  • 00:51:30 Does social media advertising work? We tried a challenge: we’d each spend fifty dollars on advertising Bad Voltage on Twitter, Reddit, Facebook, and the like, and see how we got on and whether it’s worth the money. Is it? Maybe you can do better?

Listen to 1×44: Bad Voltage

As mentioned here, Bad Voltage is a project I’m proud to be a part of. From the Bad Voltage site: Every two weeks Bad Voltage delivers an amusing take on technology, Open Source, politics, music, and anything else we think is interesting, as well as interviews and reviews. Do note that Bad Voltage is in no way related to LinuxQuestions.org, and unlike LQ it will be decidedly NSFW. That said, head over to the Bad Voltage website, take a listen and let us know what you think.



Categories: FLOSS Project Planets