Feeds

KnackForge: How to update Drupal 8 core?

Planet Drupal - Sat, 2018-03-24 01:01
How to update Drupal 8 core?

Let's see how to update your Drupal site between 8.x.x minor and patch versions, for example from 8.1.2 to 8.1.3, or from 8.3.5 to 8.4.0. I hope this will help you. A sketch of the update commands follows the version breakdown below.

  • If you are upgrading to Drupal version x.y.z:

           x -> the major version number

           y -> the minor version number

           z -> the patch version number.
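
For a Composer-managed site, the core update itself typically boils down to a few commands. This is a hedged sketch, not taken from the original post; the exact commands depend on how the site was built:

composer update drupal/core --with-dependencies
drush updatedb
drush cache-rebuild

Sites built from a tarball instead of Composer follow the manual procedure of replacing the core files and then running update.php.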

Sat, 03/24/2018 - 10:31
Categories: FLOSS Project Planets

heykarthikwithu: Composer in Drupal 8 - Manage dependencies

Planet Drupal - 2 hours 26 min ago
Composer in Drupal 8 - Manage dependencies

Install Modules/Themes via Composer in Drupal 8
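
As a quick illustration (the module name is an example, not from the post), installing a contributed module or theme with Composer looks like this, assuming the site's composer.json already references the Drupal package repository (https://packages.drupal.org/8):

composer require drupal/token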

 

heykarthikwithu Monday, 23 October 2017 - 11:32:54 IST
Categories: FLOSS Project Planets

Russ Allbery: Review: Algorithms to Live By

Planet Debian - 3 hours 50 min ago

Review: Algorithms to Live By, by Brian Christian & Tom Griffiths

Publisher: Henry Holt and Company
Copyright: April 2016
ISBN: 1-62779-037-3
Format: Kindle
Pages: 255

Another read for the work book club. This was my favorite to date, apart from the books I recommended myself.

One of the foundations of computer science as a field of study is research into algorithms: how do we solve problems efficiently using computer programs? This is a largely mathematical field, but it's often less about ideal or theoretical solutions and more about making the most efficient use of limited resources and arriving at an adequate, if not perfect, answer. Many of these problems are either day-to-day human problems or are closely related to them; after all, the purpose of computer science is to solve practical problems with computers. The question asked by Algorithms to Live By is "can we reverse this?": can we learn lessons from computer science's approach to problems that would help us make day-to-day decisions?

There's a lot of interesting material in the eleven chapters of this book, but there's also an amusing theme: humans are already very good at this. Many chapters start with an examination of algorithms and mathematical analysis of problems, dive into a discussion of how we can use those results to make better decisions, then talk about studies of the decisions humans actually make... and discover that humans are already applying ad hoc versions of the best algorithms we've come up with, given the constraints of typical life situations. It tends to undermine the stated goal of the book. Thankfully, it in no way undermines interesting discussion of general classes of problems, how computer science has tackled them, and what we've learned about the mathematical and technical shapes of those problems. There's a bit less self-help utility here than I think the authors had intended, but lots of food for thought.

(That said, it's worth considering whether this congruence is less because humans are already good at this and more because our algorithms are designed from human intuition. Maybe our best algorithms just reflect human thinking. In some cases we've checked our solutions against mathematical ideals, but in other cases they're still just our best guesses to date.)

This is the sort of book where a chapter listing is an important part of the review. The areas of algorithms discussed here are optimal stopping, explore/exploit decisions (when to go with the best thing you've found and when to look for something better), sorting, caching, scheduling, Bayes's rule (and prediction in general), overfitting when building models, relaxation (solving an easier problem than your actual problem), randomized algorithms, a collection of networking algorithms, and finally game theory. Each of these has useful insights and thought-provoking discussion of how these sometimes-theoretical concepts map surprisingly well onto daily problems. The book concludes with a discussion of "computational kindness": an encouragement to reduce the required computation and complexity penalty for both yourself and the people you interact with.

If you have a computer science background (as I do), many of these will be familiar concepts, and you might be dubious that a popularization would tell you much that's new. Give this book a shot, though; the analogies are less stretched than you might fear, and the authors are both careful and smart about how they apply these principles. This book passes a key sanity check with flying colors: the chapters on topics that I know well or have thought about a lot make few or no obvious errors and say useful and important things. For example, the scheduling chapter, which unsurprisingly is about time management, surpasses more than half of the time management literature by jumping straight to the heart of most time management problems: if you're going to do everything on a list, the order in which you do it rarely matters, so the hardest scheduling problems are about deciding what not to do rather than what order to do things in.

The point in the book where the authors won my heart completely was in the chapter on Bayes's rule. Much of the chapter is about Bayesian priors, and how one's knowledge of past events is a vital part of analysis of future probabilities. The authors then discuss the (in)famous marshmallow experiment, in which children are given one marshmallow and told that if they refrain from eating it until the researcher returns, they'll get two marshmallows. Refraining from eating the marshmallow (delayed gratification, in the psychological literature) was found to be associated with better life outcomes years down the road. This experiment has been used and abused for years for all sorts of propaganda about how trading immediate pleasure for future gains leads to a successful life, and how failure in life is because of inability to delay gratification. More evil analyses have (of course) tied that capability to ethnicity, with predictably racist results.

I have kind of a thing about the marshmallow experiment. It's a topic that reliably sends me off into angry rants.

Algorithms to Live By is the only book I have ever read to mention the marshmallow experiment and then apply the analysis that I find far more convincing. This is not a test of innate capability in the children; it's a test of their Bayesian priors. When does it make perfect sense to eat the marshmallow immediately instead of waiting for a reward? When their past experience tells them that adults are unreliable, can't be trusted, disappear for unpredictable lengths of time, and lie. And, even better, the authors supported this analysis with both a follow-up study I hadn't heard of before and with the observation that some children would wait for some time and then "give in." This makes perfect sense if they were subconsciously using a Bayesian model with poor priors.

This is a great book. It may try a bit too hard in places (applicability of the math of optimal stopping to everyday life is more contingent and strained than I think the authors want to admit), and some of this will be familiar if you've studied algorithms. But the writing is clear, succinct, and very well-edited. No part of the book outlives its welcome; the discussion moves right along. If you find yourself going "I know all this already," you'll still probably encounter a new concept or neat explanation in a few more pages. And sometimes the authors make connections that never would have occurred to me but feel right in retrospect, such as relating exponential backoff in networking protocols to choosing punishments in the criminal justice system. Or the realization that our modern communication world is not constantly connected, it's constantly buffered, and many of us are suffering from the characteristic signs of buffer bloat.

I don't think you have to be a CS major, or know much about math, to read this book. There is a lot of mathematical detail in the end notes if you want to dive in, but the main text is almost always readable and clear, at least so far as I could tell (as someone who was a CS major and has taken a lot of math, so a grain of salt may be indicated). And it still has a lot to offer even if you've studied algorithms for years.

The more I read of this book, the more I liked it. Definitely recommended if you like reading this sort of analysis of life.

Rating: 9 out of 10

Categories: FLOSS Project Planets

Julian Andres Klode: APT 1.6 alpha 1 – seccomp and more

Planet Debian - Sun, 2017-10-22 20:44

I just uploaded APT 1.6 alpha 1, introducing a very scary thing: seccomp sandboxing for methods, the programs that download files from the internet and decompress or compress stuff. With seccomp I reduced the number of system calls these methods can use from 430 to 149. Specifically, we excluded most ways of IPC, xattrs, and most importantly, the ability for methods to clone(2), fork(2), or execve(2) (or execveat(2)). Yes, that’s right – methods can no longer execute programs.

This was a real problem, because the http method did in fact execute programs – there is this small option called ProxyAutoDetect or Proxy-Auto-Detect where you can specify a script to run for a URL, and the script outputs a (list of) proxies. In order to be able to seccomp the http method, I moved the invocation of the script to the parent process. The parent process now executes the script as the sandbox user, but without seccomp (obviously).
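
For context, the wiring for that option in apt configuration looks roughly like this (a sketch; the script path is illustrative, and the script simply prints the proxy to use for the URL it is handed):

  Acquire::http::Proxy-Auto-Detect "/usr/local/bin/apt-proxy-detect";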

I tested the code on amd64, ppc64el, s390x, arm64, mipsel, i386, and armhf. I hope it works on all other architectures libseccomp is currently built for in Debian, but I did not check that, so your apt might be broken now if you use powerpc, powerpcspe, armel, mips, mips64el, hppa, or x32 (I don’t think you can even really use x32).

Also, apt-transport-https is gone for good now. When installing the new apt release, any installed apt-transport-https package is removed (apt breaks apt-transport-https now, but it also provides it versioned, so any dependencies should still be satisfiable).

David also did a few cool bug fixes again, finally teaching apt-key to ignore unsupported GPG key files instead of causing weird errors.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: linl 0.0.1: linl is not Letter

Planet Debian - Sun, 2017-10-22 17:06

Aaron Wolen and I are pleased to announce the availability of the initial 0.0.1 release of our new linl package on the CRAN network. It provides a simple-yet-powerful Markdown---and RMarkdown---wrapper around the venerable LaTeX letter class. Aaron had done the legwork in the underlying pandoc-letter repository, upon which we build via proper rmarkdown integration.

The package also includes a LaTeX trick or two: optional header and signature files, nicer font, better size, saner default geometry and more. See the following screenshot which shows the package vignette---itself a simple letter---along with (most of) its source:
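
As a sketch of what such a letter's source looks like (the field values are illustrative; output: linl::linl is the part supplied by the package):

---
author: Jane Doe
output: linl::linl
---

Dear reader,

This letter is written in Markdown and rendered via rmarkdown.

Jane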

The initial (short) NEWS entry follows:

Changes in linl version 0.0.1 (2017-10-17)
  • Initial CRAN release

The date is a little off; it took a little longer than usual for the good folks at CRAN to process the initial submission. We expect future releases to be more timely.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Iain R. Learmonth: Free Software Efforts (2017W42)

Planet Debian - Sun, 2017-10-22 16:45

Here’s my weekly report for week 42 of 2017. In this week I have replaced my spacebar, failed to replace an HDD and begun the process of replacing my YubiKey.

Debian

Earlier in the week I blogged about powerline-taskwarrior. There is a new upstream version available that includes the patches I had produced for Python 2 support, and I have filed #879225 to remind me to package this.

The state of emscripten is still not great, and as I don’t have the time to chase this up and I certainly don’t have the time to fix it myself, I’ve converted the ITP for csdr to an RFP.

As I no longer have the time to maintain map.debian.net, I have released this domain name and published the sources behind the service.

Tor Project

There was a request to remove the $ from family fingerprints on Atlas. These actually come from Onionoo, and we have decided to fix this in Onionoo, but I did push a small fix for Atlas this week that makes sure Atlas doesn’t care whether there are $ prefixes or not.

I requested that a Trac component be created for metrics-bot. I wrote a separate post about metrics-bot.

I also attended the weekly metrics team meeting.

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

I have not had any free software related expenses this week. The current funds I have available for equipment, travel and other free software expenses remain £60.52. I do not believe that any hardware I rely on is at imminent risk of failure.

I do not find it likely that I’ll be travelling to Cambridge for the miniDebConf, as the train alone would be around £350 and hotel accommodation a further £600 (to include both me and Ana).

Categories: FLOSS Project Planets

Steinar H. Gunderson: Introducing Narabu, part 3: Parallel structure

Planet Debian - Sun, 2017-10-22 16:39

Narabu is a new intraframe video codec. You probably want to read part 1 and part 2 first.

Now having a rough idea of how the GPU works, it's obvious that we need our codec to split the image into a lot of independent chunks to be parallel; we're aiming for about a thousand chunks. We'll also aim for a format that's as simple as possible; other people can try to push the frontiers on research, I just want something that's fast, reasonably good, and not too hard to implement.

First of all, let's look at JPEG. It works by converting the image to Y'CbCr (usually with subsampled chroma), splitting each channel into 8x8 blocks, DCT-ing them, quantizing, and then using Huffman coding to encode the coefficients. This is a very standard structure, and JPEG, despite being really old by now, is actually hard to beat, so let's use that as a template.

But we probably can't use JPEG wholesale. Why? The answer is that the Huffman coding is just too serial. It's all just one stream, and without knowing where the previous symbol ends, we don't know where to start decoding the next. You can partially solve this by having frequent restart markers, but these reduce coding efficiency, and there's no index of them; you'd have to make a parallel scan for them, which is annoying. So we'll need to at least do something about the entropy coding.

Do we need to change anything else to reach our ~1000 parallel target? 720p is the standard video target these days; the ~1M raw pixels would be great if we could do all of them independently, but we obviously can't, since DCT is not independent per pixel (that would sort of defeat the purpose). However, there are 14,400 DCT blocks (8x8), so we can certainly do all the DCTs in parallel. Quantization after that is trivially parallelizable, at least as long as we don't aim for trellis or the likes. So indeed, it's only entropy coding that we need to worry about.

Since we're changing entropy coding anyway, I picked rANS, which is confusing but has a number of neat properties; it's roughly as expensive as Huffman coding, but has an efficiency closer to that of arithmetic encoding, and you can switch distributions for each symbol. It requires a bit more calculation, but GPUs have plenty of ALUs (a typical recommendation is 10:1 calculation to memory access), so that should be fine. I picked a pretty common variation with 32-bit state and 8-bit I/O, since 64-bit arithmetic is not universally available on GPUs (I checked a variation with 64-bit state and 32-bit I/O, but it didn't really matter much for speed). Fabian Giesen has described how you can actually do rANS encoding and decoding in parallel over a warp, but I've treated it as a purely serial operation. I don't do anything like adaptation, though; each coefficient is assigned to one out of four rANS static distributions that are signaled at the start of each stream. (Four is a nice tradeoff between coding efficiency, L1 cache use and the cost of transmitting the distributions themselves. I originally used eight, but a bit of clustering showed it was possible to get it down to four at basically no extra cost. JPEG does a similar thing, with separate Huffman codes for AC and DC coefficients. And I've got separate distributions for luma and chroma, which also makes a lot of sense.)
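
To make that concrete, here is a minimal byte-wise rANS coder in Python: 32-bit state, 8-bit I/O, one static distribution whose frequencies sum to a power of two. This is a sketch of the common variant described above (constants like the 12-bit scale and the 2^23 interval bound are illustrative), not Narabu's actual implementation:

SCALE_BITS = 12              # frequencies are quantized to sum to 1 << 12
M = 1 << SCALE_BITS
RANS_L = 1 << 23             # lower bound of the normalized state interval

def cum_freqs(freqs):
    # cumulative start of each symbol's slot range in [0, M)
    starts, acc = [], 0
    for f in freqs:
        starts.append(acc)
        acc += f
    assert acc == M
    return starts

def encode(symbols, freqs):
    starts = cum_freqs(freqs)
    out = bytearray()
    x = RANS_L
    for s in reversed(symbols):          # rANS encodes in reverse order
        f, start = freqs[s], starts[s]
        x_max = ((RANS_L >> SCALE_BITS) << 8) * f
        while x >= x_max:                # renormalize: shift out low bytes
            out.append(x & 0xFF)
            x >>= 8
        x = (x // f) * M + (x % f) + start
    for _ in range(4):                   # flush the final 32-bit state
        out.append(x & 0xFF)
        x >>= 8
    return bytes(reversed(out))          # decoder reads the stream forward

def decode(data, freqs, n):
    starts = cum_freqs(freqs)
    slot2sym = [s for s, f in enumerate(freqs) for _ in range(f)]
    it = iter(data)
    x = 0
    for _ in range(4):                   # read back the initial state
        x = (x << 8) | next(it)
    out = []
    for _ in range(n):
        slot = x % M                     # which symbol owns this slot?
        s = slot2sym[slot]
        x = freqs[s] * (x // M) + slot - starts[s]
        while x < RANS_L:                # renormalize: pull in bytes
            x = (x << 8) | next(it)
        out.append(s)
    return out

# toy usage: four symbols, frequencies summing to 4096
freqs = [2048, 1024, 512, 512]
msg = [0, 1, 0, 2, 3, 0, 1, 0]
assert decode(encode(msg, freqs), freqs, len(msg)) == msg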

The restart cost of rANS is basically writing the state to disk; some of that we would need to write even without a restart, and there are ways to combine many such end states for less waste (I'm not using them), but let's be conservative and assume we waste all of it for each restart. At 150 Mbit/second for 720p60, the luma plane of one frame is about 150 kB. 10% (15 kB) sounds reasonable to sacrifice to restarts, which means we can have about 3750 of them, or one for each 250th pixel or so. (3750 restarts also means 3750 independent entropy coding streams, so we're still well above our 1000 target.)

So the structure is more or less clear; we'll DCT the blocks in parallel (since an 8x8 block can be expressed as 8 vertical DCTs and then 8 horizontal DCTs, we can even get 8-way parallelism there), and then encode at least 250 coefficients into an rANS stream. I've chosen to let a stream encompass a single coefficient from each of 320 DCT blocks instead of all coefficients from 5 DCT blocks; it's sort of arbitrary and might be a bit unintuitive, but it feels more natural, especially since you don't need to switch rANS distributions for each coefficient.

So that's a lot of rambling for a sort-of abstract format. Next time we'll go into how decoding works. Or encoding. I haven't really made up my mind yet.

Categories: FLOSS Project Planets

Jaime Buelta: A Django project template for a RESTful Application using Docker

Planet Python - Sun, 2017-10-22 14:58
I used what I learned, and some decisions, to create a template for new projects. Part of software development is mainly plumbing: laying bricks together and connecting parts so the important bits of software can be accessed. That’s a pretty important part of the work, but it can be quite tedious and frustrating. This is … Continue reading A Django project template for a RESTful Application using Docker
Categories: FLOSS Project Planets

KDE Edu Sprint 2017

Planet KDE - Sun, 2017-10-22 14:52

My first KDE sprint. I am glad that it finally happened!

A day before I left for Berlin, I did not even have my visa. Everything from getting the visa to booking the flight tickets and finally packing the bags happened the day before I left for Berlin.

We all landed in Berlin on the 6th, a day before the sprint started. I think we all took a lot of rest that day because most of us were tired from all the traveling. We met on the morning of the 7th at the reception of the hotel we were staying in, where I finally got to meet people I had only known through IRC nicknames. From there we headed out for some breakfast and finally to Endocode, where our sprint took place.

Before we started working on our tasks, we had discussions about how to make the KDE Edu website better, making applications easily accessible to universities/schools, etc. We prepared a list of tasks that could be done during the sprint and finally started working on them.

I mostly worked on Cantor, completing some of what I could not finish during GSoC. After a few discussions and help from my mentor Filipe, I merged some of my work into the qprocess_port branch and started polishing the R backend of Cantor. Filipe plans to release a new version in December, for which he wants to merge my R backend work to master; for that to happen I need to make the syntax highlighter and tab completion of the R backend work. I completed some of that during the sprint.

I also had a discussion with Timothee regarding GCompris-Server, showed him my work, and discussed what the next step for it should be.

Well, that’s all I did. I had a great time throughout the sprint, and it was a pleasure meeting fellow KDE developers, especially my mentors Filipe and Timothee.

Here’s a picture we took (without Aleix and David) just outside of where the Qt World Summit happened. Aleix was finding a new place to stay (if I remember correctly) and David had some other work.

 

A big thank you to the KDE e.V. board for sponsoring my trip, and to Endocode for providing us an office and access to unlimited drinks and a weird bottle opener.


Categories: FLOSS Project Planets

Hideki Yamane: openSUSE.Asia Summit 2017 in Tokyo

Planet Debian - Sun, 2017-10-22 14:35
This weekend a large typhoon was approaching Japan; nevertheless, I went to UEC (The University of Electro-Communications, 電気通信大学) to give a talk at openSUSE.Asia Summit 2017 in Tokyo.

"... hey, wait. openSUSE? Are you using openSUSE?"

Honestly, no, I'm not. My talk was "openSUSE tools on Debian" - the only session about Debian at that conference :)

Photo by Youngbin Han (Ubuntu Korea Loco), thanks!

We in Debian distribute some tools from openSUSE - OBS (Open Build Service) and Snapper - and I'm now working on openQA. So it was a good chance to make contact with upstream (= openSUSE) people, and I got some hints from them. Thanks!


openSUSE tools on Debian from Hideki Yamane
Categories: FLOSS Project Planets

Phil Steitz: Selling the truth

Planet Apache - Sun, 2017-10-22 13:31
This month's Significance magazine includes a jarring article about fake news.  As one would expect in Significance, there is interesting empirical data in the article.  What I found most interesting was the following quote attributed to Dorothy Byrne, a British broadcast journalism leader:
"You can't just feed people a load of facts...we are social animals, we relate to other people, so we have to always have a mixture of telling people's human stories while at the same time giving context to those stories and giving the real facts." Just presenting and objectively supporting debunking factual evidence is not sufficient.  We need to acknowledge that just as emotional triggers are key to spreading fake news, so they need to be considered in repairing the damage.  I saw a great example of that in yesterday's Wall Street Journal.  An article, titled "Video Contradicts Kelly's Criticism of Congresswoman," sets out to debunk the fake news story promulgated by the Trump administration claiming that Florida Rep. Frederica Wilson had touted her personal efforts in getting funding for an FBI building in her district while not acknowledging the slain FBI agents for whom the building was named.  The Journal article could have stopped at the factual assertions that she had not been elected when the funding was approved and that a video of the speech she gave includes her acknowledging the agents.  But it goes on to provide emotive context, describing the Congresswoman's lifelong focus on issues affecting low-income families and her personal connection with Army Sgt. La David Johnson, the Green Beret whose passing ultimately led to her confrontation with the Trump administration.  The details on how she had known Sgt. Johnson's family for generations and that he himself had participated in a mentoring program that she founded provided context for the facts.  The emotive picture painted by the original fake news claim and the administration's name-calling "all hat, no cattle" was replaced with the image of a caring human being.  In that light, it's easier to believe the truth - that Rep. Wilson was gracious and respectful of the fallen agents and their families just as she was of Sgt. Johnson and his family.

The lesson learned here is that in debunking fake news, "factual outrage" is not enough - we need to focus on selling the truth as the more emotionally satisfying position.  As the Significance article points out, people are drawn to simple explanations and beliefs that fit with what they want to be true.  So to repair the damage of fake news, we have to not just show people that their beliefs are inconsistent with reality - we need to provide them with another, emotionally acceptable reality that is closer to the truth.


Categories: FLOSS Project Planets

Claus Ibsen: Working with large messages using Apache Camel and ActiveMQ Artemis improved in upcoming Camel 2.21 release

Planet Apache - Sun, 2017-10-22 08:45
Historically, the Apache ActiveMQ message broker was created in a time when large messages were measured in MB, not in GB as they may be today.

This is not the case with the next generation broker Apache ActiveMQ Artemis (or just Artemis) which has much better support for large messages.

So it's about time that the Camel team finally had some time to work on this, to ensure Camel works well with Artemis and large messages. This work was committed this weekend, and we provide an example to demonstrate it.

The example runs Camel with the following two small routes:
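
The routes are shown as images in the original post; an XML sketch of their shape, with endpoint URIs assumed from the description and the directories mentioned below, would be:

<routes xmlns="http://camel.apache.org/schema/spring">
  <!-- route 1: pick up files and send them to the data queue -->
  <route>
    <from uri="file:target/inbox"/>
    <to uri="jms:queue:data"/>
  </route>
  <!-- route 2: consume from the data queue and save to file,
       with stream caching enabled -->
  <route streamCache="true">
    <from uri="jms:queue:data"/>
    <to uri="file:target/outbox"/>
  </route>
</routes>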



The first route just routes files to a queue on the message broker named data. The second route does the opposite: it routes from the data queue to file.

Pay attention to the second route, as it has turned on Camel's stream caching. This ensures that Camel will deal with large streaming payloads in a manner where Camel can automatically spool big streams to temporary disk space to avoid taking up memory. The stream caching in Apache Camel is fully configurable, and you can set up thresholds based on payload size, memory left in the JVM, etc. to trigger when to spool to disk. However, the default settings are often sufficient.

Camel then uses the JMS component to integrate with the ActiveMQ Artemis broker, which you set up as follows:
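
Again a sketch rather than the post's exact snippet; a plain-XML setup along these lines, with the broker URL assumed:

<bean id="artemisConnectionFactory"
      class="org.apache.activemq.artemis.jms.client.ActiveMQJMSConnectionFactory">
  <constructor-arg value="tcp://localhost:61616"/>
</bean>

<bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
  <property name="connectionFactory" ref="artemisConnectionFactory"/>
</bean>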



This is all standard configuration (you should consider setting up a connection pool as well).

The example requires running an ActiveMQ Artemis message broker separately in its own JVM, and then starting the Camel JVM with a lower memory setting such as 128 MB or 256 MB, which can be done via Maven:

  export MAVEN_OPTS="-Xmx256m"

And then you run Camel via Maven

  mvn camel:run

When the application runs, you can copy big files to the target/inbox directory, which should stream these big messages to the Artemis broker, and then back again to Camel, which saves them to the target/outbox directory.

For example, I tried this by copying a 1.6 GB Docker VM file, and Camel logs the following:
INFO  Sending file disk.vmdk to Artemis
INFO  Finish sending file to Artemis
INFO  Received data from Artemis
INFO  Finish saving data from Artemis as file

And we can see the file is saved again, and it's also the correct size of 1.6 GB:

$ ls -lh target/outbox/
total 3417600
-rw-r--r--  1 davsclaus  staff   1.6G Oct 22 14:39 disk.vmdk

I attached jconsole to the running Camel JVM and monitored the memory usage, which is shown in the graph:


The graph shows that the heap memory peaked at around 130 MB and that after GC it's back down to around 50 MB. The JVM is configured with a max of 256 MB.

You can find detailed step-by-step instructions with the example on exactly how to run it, so you can try it for yourself. The example is part of the upcoming Apache Camel 2.21 release, where the camel-jms component has been improved to support javax.jms.StreamMessage types and has special optimisation for ActiveMQ Artemis, as demonstrated by this example.

PS: The example could be written in numerous ways, but instead of creating yet another Spring Boot based example, we chose to just use plain XML. In the end Camel does not care; you can implement and use Camel however you like.


Categories: FLOSS Project Planets

EuroPython: EuroPython 2017: Videos for Thursday available online

Planet Python - Sun, 2017-10-22 08:02

We are pleased to announce the third batch of cut videos for EuroPython 2017.

To see the new videos, please head over to our EuroPython YouTube channel and select the "EuroPython 2017" playlist. The new videos start at entry 96 in the playlist.

Next week we will release the last batch of videos currently marked as “private”. 

Enjoy,

EuroPython 2017 Team
EuroPython Society
EuroPython 2017 Conference 

Categories: FLOSS Project Planets

Catalin George Festila: The Google Cloud Pub/Sub python module.

Planet Python - Sun, 2017-10-22 07:59
This is a test of a Google feature from the cloud.google.com/pubsub web page.
The Google development team tells us about this service:
The Google Cloud Pub/Sub service allows applications to exchange messages reliably, quickly, and asynchronously. To accomplish this, a producer of data publishes a message to a Cloud Pub/Sub topic. A subscriber client then creates a subscription to that topic and consumes messages from the subscription. Cloud Pub/Sub persists messages that could not be delivered reliably for up to seven days. This page shows you how to get started publishing messages with Cloud Pub/Sub using client libraries.
The simple idea behind it:
Publisher applications can send messages to a topic, and other applications can subscribe to that topic to receive the messages.
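
As a short sketch of the publishing side (the project and topic IDs are placeholders; this follows the pubsub_v1 surface of the client library installed below, so treat the details as illustrative):

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path('my-project-id', 'my-topic')

# publish() returns a future; result() blocks until the server
# acknowledges the message and returns its ID
future = publisher.publish(topic_path, b'Hello, Pub/Sub!')
print(future.result())
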
I start with the installation of the Python module, using Python 2.7 and the pip tool.
C:\Python27>cd Scripts

C:\Python27\Scripts>pip install --upgrade google-cloud-pubsub
Collecting google-cloud-pubsub
Downloading google_cloud_pubsub-0.28.4-py2.py3-none-any.whl (79kB)
100% |################################| 81kB 300kB/s
...
Successfully installed google-cloud-pubsub-0.28.4 grpc-google-iam-v1-0.11.4 ply-3.8
psutil-5.4.0 pyasn1-modules-0.1.5 setuptools-36.6.0
The next steps involve some settings in the Google console; see this Google page.
The default settings can be initialized and set with this command: gcloud init.
You need to edit these settings and app.yaml at: ~/src/.../appengine/flexible/pubsub$ nano app.yaml.
After you set all of this and run the command gcloud app deploy, you can see the output at https://[YOUR_PROJECT_ID].appspot.com.
The main goal of this tutorial was to start and run the Google Cloud Pub/Sub service with Python, and this has been achieved.
Categories: FLOSS Project Planets

Michael Stapelberg: Which VCS do Debian’s Go package upstreams use?

Planet Debian - Sun, 2017-10-22 07:20

In the pkg-go team, we are currently discussing which workflows we should standardize on.

One of the considerations is what goes into the “upstream” Git branch of our repositories: should it track the upstream Git repository, or should it contain orig tarball imports?

Now, tracking the upstream Git repository only works if upstream actually uses Git. The go tool, which is widely used within the Go community for managing Go packages, supports Git, Mercurial, Bazaar and Subversion. But which of these are actually used in practice?

Let’s find out!

Option 1: If you have the sources lists of all suites locally anyway

/usr/lib/apt/apt-helper cat-file \
  $(apt-get indextargets --format '$(FILENAME)' 'ShortDesc: Sources' 'Origin: Debian') \
  | sed -n 's,Go-Import-Path: ,,gp' \
  | sort -u

Option 2: If you prefer to use a relational database over text files

This is the harder option, but also the more complete one.

First, we’ll need the Go package import paths of all Go packages which are in Debian. We can get them from the ProjectB database, Debian’s main PostgreSQL database containing all of the state about the Debian archive.

Unfortunately, only Debian Developers have SSH access to a mirror of ProjectB at the moment. I contacted DSA to ask about providing public ProjectB access.

ssh mirror.ftp-master.debian.org "echo \"SELECT value FROM source_metadata \
  LEFT JOIN metadata_keys ON (source_metadata.key_id = metadata_keys.key_id) \
  WHERE metadata_keys.key = 'Go-Import-Path' GROUP BY value\" | \
  psql -A -t service=projectb" > go_import_path.txt

I uploaded a copy of the resulting go_import_path.txt, if you’re curious.

Now, let’s come up with a little bit of Go to print the VCS responsible for each specified Go import path:

go get -u golang.org/x/tools/go/vcs
cat >vcs4.go <<'EOT'
package main

import (
	"fmt"
	"log"
	"os"
	"sync"

	"golang.org/x/tools/go/vcs"
)

func main() {
	var wg sync.WaitGroup
	for _, arg := range os.Args[1:] {
		wg.Add(1)
		go func(arg string) {
			defer wg.Done()
			rr, err := vcs.RepoRootForImportPath(arg, false)
			if err != nil {
				log.Println(err)
				return
			}
			fmt.Println(rr.VCS.Name)
		}(arg)
	}
	wg.Wait()
}
EOT

Lastly, run it in combination with uniq(1) to discover…

go run vcs4.go $(tr '\n' ' ' < go_import_path.txt) | sort | uniq -c
    760 Git
      1 Mercurial
Categories: FLOSS Project Planets

Sean Whitton: Debian Policy call for participation -- October 2017

Planet Debian - Sat, 2017-10-21 22:29

Here are some of the bugs against the Debian Policy Manual. In particular, there really are quite a few patches needing seconds from DDs.

Consensus has been reached and help is needed to write a patch:

#759316 Document the use of /etc/default for cron jobs

#761219 document versioned Provides

#767839 Linking documentation of arch:any package to arch:all

#770440 policy should mention systemd timers

#773557 Avoid unsafe RPATH/RUNPATH

#780725 PATH used for building is not specified

#793499 The Installed-Size algorithm is out-of-date

#810381 Update wording of 5.6.26 VCS-* fields to recommend encryption

#823256 Update maintscript arguments with dpkg >= 1.18.5

#833401 virtual packages: dbus-session-bus, dbus-default-session-bus

#835451 Building as root should be discouraged

#838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus

#845715 Please document that packages are not allowed to write outside thei…

#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…

#874019 Note that the ’-e’ argument to x-terminal-emulator works like ’--’

#874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs:

#688251 Built-Using description too aggressive

#737796 copyright-format: support Files: paragraph with both abbreviated na…

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#810381 Update wording of 5.6.26 VCS-* fields to recommend encryption

#835451 Building as root should be discouraged

#845255 Include best practices for packaging database applications

#846970 Proposal for a Build-Indep-Architecture: control file field

#850729 Documenting special version number suffixes

#864615 please update version of posix standard for scripts (section 10.4)

#874090 Clarify wording of some passages

#874095 copyright-format: Use the “synopsis” term established in the de…

Merged for the next release:

#683495 perl scripts: ”#!/usr/bin/perl” MUST or SHOULD?

#877674 [debian-policy] update links to the pdf and other formats of the do…

#878523 [PATCH] Spelling fixes

Categories: FLOSS Project Planets

Sandipan Dey: Feature Detection with Harris Corner Detector and Matching images with Feature Descriptors in Python

Planet Python - Sat, 2017-10-21 20:48
The following problem appeared in a project in this Computer Vision course (CS4670/5670, Spring 2015) at Cornell. In this article, a Python implementation is described. The description of the problem is taken (with some modifications) from the project description. The same problem appeared in this assignment as well. The images used … Continue reading Feature Detection with Harris Corner Detector and Matching images with Feature Descriptors in Python
Categories: FLOSS Project Planets

Bryan Pendleton: Abaddon's Gate: a very short review

Planet Apache - Sat, 2017-10-21 12:06

Book three of the Expanse series is Abaddon's Gate.

Abaddon's Gate starts out as a continuation of books one and two.

Which is great, and I would have been just fine with that.

But then, about halfway through (page 266, to be exact), Abaddon's Gate takes a sudden and startling 90 degree turn, revealing that much of what you thought you knew from the first two books is completely wrong, and exposing a whole new set of ideas to contemplate.

And so, then, off we go, in a completely new direction!

One of the things I'm really enjoying about the series is the "long now" perspective that it takes. You might think that a couple thousand years of written history is a pretty decent accomplishment for a sentient species, but pah! that's really nothing, in the big picture of things.

If you liked the first two books, you'll enjoy Abaddon's Gate. If you didn't like any of this, well, you probably figured that out about 50 pages into Leviathan Wakes and that's fine, too.

Categories: FLOSS Project Planets

Bryan Pendleton: Bryan's simple rules for online security

Planet Apache - Sat, 2017-10-21 11:56

I seem to be posting a lot less frequently recently. I was traveling, work has been crazy busy, you know how it goes. Oh, well.

I was looking at some stuff while I was traveling, reviewed what I had thought, and decided it still holds, so I'm posting it here.

It ain't perfect, but then nothing is, and besides which you get what you paid for, so here are my 100% free of charge simple rules for online security:

  • Always do your banking and other important web accesses from your own personal computer, not from public computers like those in hotel business centers, coffee shops, etc.
  • Always use Chrome or Firefox to access "important" web sites like the bank, credit cards, Amazon, etc.
  • Always use https:// URLs for those web sites
  • Always let Windows Update automatically update your computer as it wants to, also always let Chrome and Firefox update themselves when they want to.
  • Stick with GMail, it's about as secure as email can get right now. Train yourself to be suspicious of weird mails from unknown senders, or with weird links in them, just in case you get a "phishing" mail that pretends to be from your bank or credit card company, etc.
  • If you get a mail from a company you care about (bank, retirement account, credit card, health care company, etc.), instead of clicking on the link in the mail, ALWAYS open up a new browser window and type in the https:// URL of the bank or whatever yourself. It's clicking the link in the email that gets you in trouble.
  • At least once a week or so, sign on and look at your credit card charges, your bank account, your retirement account, etc., just to see that everything on there looks as it should be. If not, call your bank and credit card company, dispute the charge, and ask them to send you a new credit card, ATM card, whatever it was.
  • Don't accept phone calls from people who aren't in your contacts, or whose call you didn't expect. If you accept a phone call that you think might be legitimate (e.g., from your bank or credit card company), but you need to discuss your account, hang up and call them back, using the main service number from their web site, not the number that called you. Never answer "security questions" over the phone unless you initiated the call yourself. Con artists that call you on the phone can be really persuasive, this is actually the biggest threat nowadays I think.
If you do these simple things, you have made yourself a sufficiently "hard" target that the bad guys will go find somebody who's a lot easier to attack instead of you.
Categories: FLOSS Project Planets