FLOSS Project Planets

Dirk Eddelbuettel: nanotime 0.2.0

Planet Debian - Thu, 2017-06-22 08:16

A new version of the nanotime package for working with nanosecond timestamps just arrived on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic.

Thanks to a metric ton of work by Leonardo Silvestri, the package now uses S4 classes internally allowing for greater consistency of operations on nanotime objects.

Changes in version 0.2.0 (2017-06-22)
  • Rewritten in S4 to provide more robust operations (#17 by Leonardo)

  • Ensure tz="" is treated as unset (Leonardo in #20)

  • Added format and tz arguments to nanotime, format, print (#22 by Leonardo and Dirk)

  • Ensure printing respects options()$max.print; ensure names are kept with the vector (#23 by Leonardo)

  • Correct summary() by defining names<- (Leonardo in #25 fixing #24)

  • Report error on operations that are not meaningful for the type; handle NA, NaN, Inf, -Inf correctly (Leonardo in #27 fixing #26)

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Python Anywhere: The PythonAnywhere API: beta now available for all users

Planet Python - Thu, 2017-06-22 08:12

We've been slowly developing an API for PythonAnywhere, and we've now enabled it so that all users can try it out if they wish. Head over to your accounts page and find the "API Token" tab to get started.

The API is still very much in beta, and it's by no means complete! We've started out with a few endpoints that we thought we ourselves would find useful, and some that we needed internally.

Yes, I have revoked that token :)

Probably the most interesting functionality that it exposes at the moment is the ability to create, modify, and reload a webapp remotely, but there are a few other things in there as well, like file and console sharing. You can find a full list of endpoints here: http://help.pythonanywhere.com/pages/API
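As a quick illustration, here is a minimal sketch of driving the webapp-reload endpoint with the requests library. The username, token, and domain below are placeholders, and the endpoint path follows the layout documented on the help page linked above:

import requests

username = 'yourusername'
token = 'your-api-token'    # from the "API Token" tab on your accounts page
domain = '{}.pythonanywhere.com'.format(username)

# POST to the webapp's /reload/ endpoint; authentication is a
# "Token" Authorization header.
response = requests.post(
    'https://www.pythonanywhere.com/api/v0/user/{}/webapps/{}/reload/'.format(
        username, domain),
    headers={'Authorization': 'Token {}'.format(token)},
)

if response.status_code == 200:
    print('Webapp reloaded')
else:
    print('Unexpected response: {} {}'.format(response.status_code, response.text))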

We're keen to hear your thoughts and suggestions!

Categories: FLOSS Project Planets

Norbert Preining: Signal handling in R

Planet Debian - Thu, 2017-06-22 04:28

Recently I have been programming quite a lot in R, and today stumbled over the problem of implementing a kind of monitoring loop in R. Typically that would be an infinite loop with sleep calls, but I wanted to allow for waking up from the sleep by sending UNIX style signals, in particular SIGINT. After some searching I found Beyond Exception Handling: Conditions and Restarts from the Advanced R book. But it didn’t really help me a lot in programming an interrupt handler.

My requirements were:

  • an interruption of the work-part should be immediately restarted
  • an interruption of the sleep-part should go immediately into the work-part

Unfortunately it does not seem possible to ignore interrupts entirely from within R code. The best one can do is install interrupt handlers and try to repeat the code that was being executed when the interrupt happened. This is what I tried to implement in the code below. I still have to digest the documentation about conditions and restarts, and play around a lot, but at least this is an initial working version.

workfun <- function() {
  i <- 1
  do_repeat <- FALSE
  while (TRUE) {
    message("begin of the loop")
    withRestarts(
      {
        # do all the work here
        cat("Entering work part i =", i, "\n")
        Sys.sleep(10)
        i <- i + 1
        cat("finished work part\n")
      },
      gotSIG = function() {
        message("interrupted while working, restarting work part")
        do_repeat <<- TRUE
        NULL
      }
    )
    if (do_repeat) {
      cat("restarting work loop\n")
      do_repeat <- FALSE
      next
    } else {
      cat("between work and sleep part\n")
    }
    withRestarts(
      {
        # do the sleep part here
        cat("Entering sleep part i =", i, "\n")
        Sys.sleep(10)
        i <- i + 1
        cat("finished sleep part\n")
      },
      gotSIG = function() {
        message("got work to do, waking up!")
        NULL
      }
    )
    message("end of the loop")
  }
}

cat("Current process:", Sys.getpid(), "\n")

withCallingHandlers(
  {
    workfun()
  },
  interrupt = function(e) {
    invokeRestart("gotSIG")
  }
)

While not perfect, I guess I have to live with this method for now.

Categories: FLOSS Project Planets

A. Jesse Jiryu Davis: New Driver Features for MongoDB 3.6

Planet Python - Thu, 2017-06-22 04:12

At MongoDB World this week, we announced the big features in our upcoming 3.6 release. I’m a driver developer, so I’ll share the details about the driver improvements that are coming in this version. I’ll cover six new features—the first two are performance improvements with no driver API changes. The next three are related to a new idea, “MongoDB sessions”, and for dessert we’ll have the new Notification API.

Wire Protocol Compression

Since 3.4, MongoDB has used wire protocol compression for traffic between servers. This is especially important for secondaries streaming the primary’s oplog: we found that oplog data can be compressed 20x, allowing secondaries to replicate four times faster in certain scenarios. The server uses the Snappy algorithm, which is a good tradeoff between speed and compression.

In 3.6 we want drivers to compress their conversations with the server, too. Some drivers can implement Snappy, but since zLib is more widely available we’ve added it as an alternative. When the driver and server connect they negotiate a shared compression format. (If you’re a security wonk and you know about CRIME, rest easy: we never compress messages that include the username or password hash.)

In the past, we’ve seen that the network is the bottleneck for some applications running on bandwidth-constrained machines, such as small EC2 instances or machines talking to very distant MongoDB servers. Compressing traffic between the client and server removes that bottleneck.
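For illustration, here is what opting in looks like from the driver side in PyMongo. Note that the compressors option shipped in a later PyMongo release than the drivers current at the time of this post, so treat this as a sketch of the eventual API (the server list is a placeholder):

from pymongo import MongoClient

# The driver advertises the algorithms it supports; client and server
# then negotiate a shared compression format during the connection
# handshake. 'snappy' additionally requires the python-snappy package.
client = MongoClient('mongodb://srv1,srv2,srv3/?replicaSet=rs',
                     compressors='snappy,zlib')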

OP_MSG

This is feature #2. We’re introducing a new wire protocol message called OP_MSG. This will be a modern, high-performance replacement for our messy old wire protocol messages. To explain our motive, let’s review the history.

Ye Olde Wire Protocol

First, we had three kinds of write messages, all unacknowledged, and also a message for disposing of a cursor:

Ye Olde Wire Protocol

There were also two kinds of messages that expected a reply from the server: one to create a cursor with a query, and another to get more results from the cursor:

Ye Olde Wire Protocol

Soon we added another kind of message: commands. We reused OP_QUERY, defining a command as a query on the fake $cmd collection. We also realized that our users wanted acknowledged writes, which we implemented as a write message immediately followed by a getLastError command. That brings us to this picture of Ye Olde Wire Protocol:

Ye Olde Wire Protocol

This protocol served us remarkably well for years: it implemented the features we wanted and it was quite fast. But its messiness made our lives hard when we wanted to innovate. We couldn’t add all the features we wanted to the wire protocol, so long as we were stuck with these old message types.

Middle Wire Protocol

In MongoDB 2.6 through 3.2 we unified all the message types. Now we have the Middle Wire Protocol, in which everything is a command:

Middle Wire Protocol

This is the wire protocol we use now. It’s uniform and flexible and has allowed us to rapidly add features, but it has some disadvantages:

  • Still uses silly OP_QUERY on $cmd collection
  • No unacknowledged writes
  • Less efficient

Why is it less efficient? Let’s see how a bulk insert is formatted in the Middle Wire Protocol:

Middle Wire Protocol

The “insert” command has a standard message header, followed by the command body as a single BSON document. In order to include a batch of documents in the command body, they must be subdocuments of a BSON array. This is a bit expensive for the client to assemble, and for the server to disassemble before it can insert the documents. The same goes for query replies: the server must assemble a BSON array of query results, and the client must disassemble it.
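To make that overhead concrete, here is a small sketch using the bson package that ships with PyMongo (the collection name and documents are made up): the batch has to be embedded in the command body as a BSON array, which is really a subdocument keyed by stringified indexes.

import bson

docs = [{'x': 1}, {'x': 2}, {'x': 3}]

# The whole batch is assembled into a single BSON command document;
# the array is encoded as {"0": {...}, "1": {...}, "2": {...}}.
command = bson.BSON.encode({
    'insert': 'collection',
    'documents': docs,
})

# The server must take this apart again before it can insert the
# individual documents, and query replies are wrapped the same way.
print(len(command), 'bytes')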

Compare this to Ye Olde Wire Protocol’s OP_INSERT message:

Ye Olde Wire Protocol

Ye Olde Wire Protocol’s bulk insert message was simple and efficient: just a message header followed by a stream of documents catenated end-to-end with no delimiter. How can we get back to this simpler time?

Modern Wire Protocol

This winter, we’ll release MongoDB 3.6 with a new wire protocol message:

Modern Wire Protocol

For the first time, both the client and the server use the same message type. The new OP_MSG will combine the best of the old and the middle wire protocols. It was mainly designed by Mathias Stearn, a MongoDB veteran who applied the lessons he learned from our past protocols to make a robust new format that will stand the test of time.

OP_MSG
  • Header: this is the same header as for all the other messages, with the message length and so on.
  • Flags: so far we’ve defined three flags: exhaustAllowed, moreToCome, and checkSumPresent. A client can set “exhaustAllowed” to tell the server that the client supports exhaust cursors. The client sets “moreToCome” for unacknowledged writes: the server knows the client will send more messages without expecting a server reply. A server message with “moreToCome” is an exhaust cursor. And finally, “checkSumPresent” is set if the last section is a checksum, so the receiver knows whether it should checksum the message as it reads it, in order to compare with the checksum at the end.
  • Body: this is the standard command document, like {"insert": "collection"}, plus write concern and other command arguments. It does not have to contain a BSON array of documents, however.
  • Document Stream: the big win comes here. This is the same simple format as Ye Olde Wire Protocol, which is very efficient to assemble and disassemble.
  • Checksum: An optional CRC-32C checksum. We’ve seen cases where bad hardware can corrupt documents on the wire, despite the parity bits at various layers of the network stack. Checksumming in the MongoDB protocol will protect you from corruption at no significant performance cost. (A byte-level sketch of this layout follows the list.)
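Here is a rough byte-level sketch of that layout, hand-rolling a minimal OP_MSG that carries the command {"ping": 1}. The BSON body is built by hand purely for illustration (a real 3.6 command body also carries fields such as the target database); opcode 2013 is OP_MSG:

import struct

# BSON for {"ping": 1}: int32 total length (includes itself), one
# element (0x10 = int32, cstring name, int32 value), trailing null.
elements = b'\x10ping\x00' + struct.pack('<i', 1) + b'\x00'
body = struct.pack('<i', len(elements) + 4) + elements

flag_bits = 0              # no checkSumPresent, moreToCome or exhaustAllowed
section = b'\x00' + body   # payload type 0: a single BSON document (the body)

# Standard message header: messageLength, requestID, responseTo, opCode.
OP_MSG = 2013
message_length = 16 + 4 + len(section)   # header + flag bits + sections
message = (struct.pack('<iiii', message_length, 1, 0, OP_MSG)
           + struct.pack('<I', flag_bits)
           + section)

print(message.hex())   # 16 header bytes, 4 flag bytes, then the section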

OP_MSG is the one message type to rule them all. We’ll implement it in stages. For 3.6, drivers will use document streams for efficient bulk writes. In subsequent releases the server will use document streams for query results, and we’ll add exhaust cursors, checksums, and unacknowledged writes. OP_MSG is extensible, so it will probably be our last big protocol change: all future changes to the protocol will work within the OP_MSG framework.

You’ll get faster network communications from compression and OP_MSG as soon as you upgrade your driver and server, without any code changes. The remaining four features, however, do require new code.

Sessions

The new versions of all our drivers will introduce “sessions”, an idea that MongoDB drivers have never had before. Here’s how you’ll use a session in PyMongo:

client = MongoClient('mongodb://srv1,srv2,srv3/?replicaSet=rs')

# Start a session.
with client.start_session() as session:
    collection = session.my_db.my_collection

Now, instead of getting your database and collection objects from the MongoClient, you can choose to get them from a session. So far I haven’t shown you anything useful, though. Let’s look at two new features that make use of sessions.

Retryable Writes

Currently, statements like this are very convenient with MongoDB but they’re a bit unsafe:

collection.update({'_id': 1}, {'$inc': {'x': 1}})

Here’s the problem: what happens if your program tries to send this “update” message to the server, and it gets a network error before it can read the response? Perhaps there was a brief network glitch, or perhaps the primary stepped down. It’s not always possible to know whether the server received the update before the network error or not, so you can’t safely retry it; you risk executing the update twice.

Last year at MongoDB World, I gave a 35-minute talk about how to make your operations idempotent so they could be safely retried, and I wrote a 3000-word article. All that is about to be obsolete. In MongoDB 3.6, the most common write messages will be retryable, safely and automatically.

# Start a session for retryable writes.
with client.start_session(retry_writes=True) as session:
    collection = session.my_db.my_collection
    result = collection.insert_one({'_id': 1, 'n': 0})
    result = collection.update_one({'_id': 1}, {'$inc': {'n': 1}})
    result = collection.replace_one({'_id': 1}, {'n': 42})
    result = collection.delete_one({'_id': 1})

If any of these write operations fails from a network error, the driver automatically retries it once. This is safe to do now, because MongoDB 3.6 stores an operation ID and the outcome of any write in a session. If the driver sends the same operation twice, the server ignores the second operation, but replies with the stored outcome of the first attempt. This means we can do an update like $inc exactly once, even if there’s a flaky network or a primary stepdown.
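Conceptually, the retry rule amounts to something like the sketch below. This is not PyMongo's actual internals; send_write is a hypothetical callable standing in for whatever sends the message with its session and operation ID attached:

from pymongo.errors import ConnectionFailure

def execute_retryable_write(send_write, operation_id):
    # First attempt; the operation ID travels with the message.
    try:
        return send_write(operation_id)
    except ConnectionFailure:
        # Retry exactly once with the SAME operation ID. If the first
        # attempt did reach the server, the server recognizes the
        # duplicate and replies with the stored outcome instead of
        # executing the write a second time.
        return send_write(operation_id)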

Most of the write methods you use day-to-day will be retryable in 3.6, like the CRUD methods above or the three findAndModify wrappers:

# Start a session for retryable writes.
with client.start_session(retry_writes=True) as session:
    collection = session.my_db.my_collection
    old_doc = collection.find_one_and_update({'_id': 2}, {'$inc': {'n': 1}})
    old_doc = collection.find_one_and_replace({'_id': 2}, {'n': 42})
    old_doc = collection.find_one_and_delete({'_id': 2})

Operations that affect many documents, like update_many or delete_many, won’t be retryable for a while. But bulk inserts will be retryable in 3.6:

# Start a session for retryable writes.
with client.start_session(retry_writes=True) as session:
    collection = session.my_db.my_collection

    # insert_many is ok if "ordered" is True, the default.
    result = collection.insert_many([{'_id': 1}, {'_id': 2}])

Causally Consistent Reads

Besides retryable writes, sessions also allow “causally consistent” reads: mainly, this means you can reliably read your writes and do monotonic reads, even when reading from secondaries.

# Start a session for causally consistent reads.
with client.start_session(causally_consistent_reads=True) as session:
    collection = session.my_db.my_collection
    result = collection.insert_one({'_id': 3})

    secondary_collection = session.my_db.get_collection(
        'my_collection',
        read_preference=ReadPreference.SECONDARY)

    # Guaranteed to be available on secondary,
    # may block until then.
    doc = secondary_collection.find_one({'_id': 3})

Right now, you can’t guarantee that between the time your program writes to the primary and the time it reads from a secondary, the secondary has replicated the write. It’s unpredictable whether querying the secondary will return fresh data or stale, so programs that need this guarantee must only read from the primary. Furthermore, if you spread your reads among secondaries, then results from different secondaries might jump back and forth in time.

In 3.6, a causally consistent session will let you read your writes and guarantee monotonically increasing reads from secondaries. It even works in sharded clusters!

The design, by Misha Tyulenev, Randolph Tan, and Andy Schwerin, uses a Lamport Clock to partially order events across all servers in a sharded cluster. Whenever the client sends a write operation to a server, the server notes the Lamport Clock value when the write was executed, and returns that value to the client. Then, if the client’s next message is a query, it asks the server to return query data that is causally after that Lamport Clock value:
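As an aside, a Lamport clock itself is simple. The sketch below is a generic illustration of the partial ordering described above, not MongoDB's implementation:

class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        # A local event (e.g. executing a write) advances the clock.
        self.time += 1
        return self.time

    def observe(self, received):
        # Merging a value received in a message keeps clocks ordered
        # across nodes: never move backwards past an observed event.
        self.time = max(self.time, received) + 1
        return self.time

primary, secondary = LamportClock(), LamportClock()
t_write = primary.tick()      # server returns this value to the client
secondary.observe(t_write)    # a causally consistent query on the
                              # secondary must wait until it reaches t_write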

Causal ordering of events by Lamport Clock values

If you query a secondary that hasn’t yet caught up to that point in time, according to the Lamport Clock, then your query blocks until the secondary replicates to that point. Yes, I know that in the diagram above we call the same value either “logicalTime”, “operationTime”, or “clusterTime” depending on the context. It’s complicated. Here’s the gist: this feature ensures that you can read your writes when reading from secondaries, and that each secondary query returns data causally after the previous one.

Notification API

Now it’s time for dessert! MongoDB 3.6 will have a new event notification API. You can subscribe to changes in a collection or a database, and you can filter or transform the events using an aggregation pipeline before you process them in your program:

cursor = client.my_db.my_collection.changes([
    {'$match': {
        'operationType': {'$in': ['insert', 'replace']}
    }},
    {'$match': {
        'newDocument.n': {'$gte': 1}
    }}
])

# Loops forever.
for change in cursor:
    print(change['newDocument'])

This code shows how you’ll use the API in PyMongo. The collection object will have a new method, “changes”, which takes an aggregation pipeline. In this example we listen for all inserts and replaces in a collection, and filter these events to include only documents whose “n” field is greater than or equal to 1. The result of the “changes” method is a cursor that emits changes as they occur.

In the past we’ve seen that MongoDB users deploy a Redis server on the side, just for the sake of event notifications, or they use Meteor to get notifications from MongoDB. In 3.6, these notifications will be a MongoDB builtin feature. If you were maintaining both Redis and MongoDB, soon MongoDB alone will meet your needs. And if you want event notifications but you aren’t using Javascript and don’t want to rely on Meteor, you can write your own event system in any language without reinventing Meteor.

Event notifications have two main uses. First, if you’re keeping a second system like Lucene synchronized with MongoDB, you don’t have to tail the oplog anymore. The event notification system is a well-designed API that lets you subscribe to changes which you can apply to your second system. The other use is for collaboration: when one user changes a shared piece of data, you want to update other users’ views. The new event notification API will make collaborative applications easy to build.

I’m a coder, not a bullshitter, so I wouldn’t say this if it weren’t true: MongoDB 3.6 is the most significant advance in client-server features since I started working here more than five years ago. Compression will remove bottlenecks for bandwidth-limited systems, and OP_MSG paves the way for much more efficient queries and bulk operations. Retryable writes and causally consistent reads add guarantees that MongoDB users have long wished for. And the new event notification API makes MongoDB a natural platform for collaborative applications.

Categories: FLOSS Project Planets

eGenix.com: Python Meeting Düsseldorf - 2017-06-28

Planet Python - Thu, 2017-06-22 04:00

This post announces a regional user group meeting in Düsseldorf, Germany; the announcement below was originally published in German and is rendered here in English.

Announcement

The next Python Meeting Düsseldorf will take place on:

28.06.2017, 6:00 p.m.
Room 1, 2nd floor, Bürgerhaus Stadtteilzentrum Bilk
Düsseldorfer Arcaden, Bachstr. 145, 40217 Düsseldorf


News

Talks Already Registered

Matthias Endler
        "Grumpy - Python to Go source code transcompiler and runtime"

Tom Engemann
        "BeautifulSoup as a test framework for HTML"

Jochen Wersdörfer
        "Machine Learning: Categorizing FAQs"

Linus Deike
        "Introduction to Machine Learning: Building quality forecasts from sensor data"

Andreas Bresser
        "Image recognition with OpenCV"

Philipp v.d. Bussche & Marc-Andre Lemburg
        "A Telegram bot as a Twitter interface: TwitterBot"

Further talks can still be registered. If interested, please contact info@pyddf.de.

Start Time and Location

We meet at 6:00 p.m. at the Bürgerhaus in the Düsseldorfer Arcaden.

The Bürgerhaus shares its entrance with the swimming pool and is located next to the underground garage entrance of the Düsseldorfer Arcaden.

A large "Schwimm’ in Bilk" logo hangs above the entrance. Behind the door, turn directly left to the two elevators and ride up to the 2nd floor. The entrance to Room 1 is directly on the left as you step out of the elevator.

>>> Entrance in Google Street View

Introduction

The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.

Our PyDDF YouTube channel, where we publish videos of the talks after each meeting, offers a good overview of past talks.

The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf:

Program

The Python Meeting Düsseldorf uses a mix of (lightning) talks and open discussion.

Talks can be registered in advance or brought up spontaneously during the meeting. A projector with XGA resolution is available.

To register a (lightning) talk, please send an informal email to info@pyddf.de

Costs

The Python Meeting Düsseldorf is organized by Python users for Python users.

Since the meeting room, projector, internet access, and drinks incur costs, we ask participants for a contribution of EUR 10.00 incl. 19% VAT. Pupils and students pay EUR 5.00 incl. 19% VAT.

We ask all participants to bring the amount in cash.

Registration

Since we only have seats for about 20 people, we ask that you register by email. Registration is not binding, but it makes planning easier for us.

To register for the meeting, please send an informal email to info@pyddf.de

Further Information

Further information can be found on the meeting's website:

              http://pyddf.de/

Have fun!

Marc-Andre Lemburg, eGenix.com

Categories: FLOSS Project Planets

Agiledrop.com Blog: AGILEDROP: DrupalCon sessions about PHP

Planet Drupal - Thu, 2017-06-22 03:57
Last time, we gathered together DrupalCon Baltimore sessions about DevOps. Before that, we explored the areas of Front End, Site Building, Drupal Showcase, Coding and Development, Project Management and Case Studies. And that was not our last stop. This time, we looked at sessions that were presented in the area of PHP. Advanced debugging techniques from Patrick Allaert: this session was not about Xdebug, but about tools that let you know what’s really happening in your PHP code, like the phpdbg debugger, process tracing tools such as strace and ltrace, the Linux inotify mechanism,…
Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppCCTZ 0.2.3 (and 0.2.2)

Planet Debian - Wed, 2017-06-21 21:06

A new minor version 0.2.3 of RcppCCTZ is now on CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times) and one for converting between absolute and civil times via time zones. The RcppCCTZ page has a few usage examples and details.

This version ensures that we set the TZDIR environment variable correctly on the old dreaded OS that does not come with proper timezone information---an issue which had come up while preparing the next (and awesome, trust me) release of nanotime. It also appears that I failed to blog about 0.2.2, another maintenance release, so changes for both are summarised next.

Changes in version 0.2.3 (2017-06-19)
  • On Windows, the TZDIR environment variable is now set in .onLoad()

  • Replaced init.c with registration code inside of RcppExports.cpp thanks to Rcpp 0.12.11.

Changes in version 0.2.2 (2017-04-20)
  • Synchronized with upstream CCTZ

  • The time_point object is instantiated explicitly for nanosecond use which appears to be required on macOS

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Justin Mason: Links for 2017-06-21

Planet Apache - Wed, 2017-06-21 19:58
Categories: FLOSS Project Planets

myDropWizard.com: Drupal 6 not affected by SA-CORE-2017-003!

Planet Drupal - Wed, 2017-06-21 18:50

Today, there were Critical security releases for Drupal 7 & 8:

https://www.drupal.org/SA-CORE-2017-003

We received a couple e-mails asking if it affected Drupal 6, so I decided to post this short article to say:

Happily, Drupal 6 is not affected! :-)

Of the 3 vulnerabilities in that SA, the two Drupal 8 ones don't apply to Drupal 6: it doesn't have REST or YAML support.

We did extensive testing to see if the Drupal 7 vulnerability applied to Drupal 6, including testing the 'upload' module (in Drupal 6 core) and the contrib 'filefield' and 'webform' modules, and couldn't reproduce the vulnerability.

(FYI, since we have access to the private Drupal security queue, we did our testing several months ago :-))

So, if you still use Drupal 6, you don't need to worry about a core update today!


Categories: FLOSS Project Planets

Joey Hess: DIY professional grade solar panel installation

Planet Debian - Wed, 2017-06-21 18:42

I've installed 1 kilowatt of solar panels on my roof, using professional grade equipment. The four panels are Astronergy 260 watt panels, and they're mounted on IronRidge XR100 rails. Did it all myself, without help.

I had three goals for this install:

  1. Cheap but sturdy. Total cost will be under $2500. It would probably cost at least twice as much to get a professional install, and the pros might not even want to do such a small install.
  2. Learn the roof mount system. I want to be able to add more panels, remove panels when working on the roof, and understand everything.
  3. Make every day a sunny day. With my current solar panels, I get around 10x as much power on a sunny day as a cloudy day, and I have plenty of power on sunny days. So 10x the PV capacity should be a good amount of power all the time.

My main concerns were: would I be able to find the rafters when installing the rails, and would the 5x3 foot panels be too unwieldy to get up on the roof by myself?

I was able to find the rafters, without needing a stud finder, after I removed the roof's vent caps, which exposed them. The shingles were on straight enough that I could follow the lines down and drill into the rafter on the first try every time. And I got the rails on spaced well and straight, although I could have spaced the FlashFeet out better (oops).

My drill ran out of juice half-way, and I had to hack it to recharge on solar power, but that's another story. Between the learning curve, a lot of careful measurement, not the greatest shoes for roofing, and waiting for recharging, it took two days to get the 8 FlashFeet installed and the rails mounted.

Taking a break from that and swimming in the river, I realized I should have been wearing my water shoes on the roof all along. Super soft and nubbly, they make me feel like a gecko up there! After recovering from an (unrelated) achilles tendon strain, I got the panels installed today.

Turns out they're not hard to handle on the roof by myself. Getting them up a ladder to the roof by yourself would normally be another story, but my house has a 2 foot step up from the back retaining wall to the roof, and even has a handy grip beam as you step up.

The last gotcha, which I luckily anticipated, is that panels will slide down off the rails before you can get them bolted down. This is where a second pair of hands would have been most useful. But, I macguyvered a solution, attaching temporary clamps before bringing a panel up, that stopped it sliding down while I was attaching it.

I also finished the outside wiring today. Including the one hack of this install so far. Since the local hardware store didn't have a suitable conduit to bring the cables off the roof, I cobbled one together from pipe, with foam inserts to prevent chafing.

While I have 1 kilowatt of power on my roof now, I won't be able to use it until next week. After ordering the upgrade, I realized that my old PWM charge controller would be able to handle less than half the power, and to get even that I would have needed to mount the fuse box near the top of the roof and run down a large and expensive low-voltage high-amperage cable, around 00 AWG size. Instead, I'll be upgrading to an MPPT controller, and running a single 150 volt cable to it.

Then, since the MPPT controller can only handle 1 kilowatt when it's converting to 24 volts, not 12 volts, I'm gonna have to convert the entire house over from 12V DC to 24V DC, including changing all the light fixtures and rewiring the battery bank...

Categories: FLOSS Project Planets

Pilot a Submarine - The Submarine

Planet KDE - Wed, 2017-06-21 17:00

In this post, I am going to discuss how a submarine works and my thought process in implementing the three basic features of a submarine in the “Pilot a Submarine” activity for the Qt version of GCompris, which are:

  • The Engine
  • The Ballast tanks and
  • The Diving Planes
The Engine

The engines of most submarines are either nuclear-powered or diesel-electric, and they drive an electric motor which, in turn, powers the submarine's propellers. In this implementation, we will have two buttons: one for increasing and another for decreasing the power generated by the submarine.

Ballast Tanks

The ballast tanks are spaces in the submarine that can be filled with either water or air. They help the submarine dive and resurface, using the concept of buoyancy. If the tanks are filled with water, the submarine dives underwater; if they are filled with air, it resurfaces on the surface of the water.
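The physics behind this is Archimedes' principle. As a rough sketch (standard physics, not code from the activity), with \rho the density of water, V the submerged volume, m the mass of the boat plus tank contents, and g the gravitational acceleration:

F_b = \rho V g \qquad F_g = m g

Filling the tanks with water increases m until F_g > F_b and the boat sinks; flushing them with air decreases m until F_b > F_g and it rises.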

Diving Planes

Once underwater, the diving planes of a submarine help to accurately control its depth. They are very similar to the fins on a shark's body, which help it swim and dive. When the planes are pointed downwards, the water flowing over them generates more pressure on the top surface than on the bottom surface, forcing the submarine to dive deeper. This allows the driver to control the depth and the angle of the submarine.

Implementation

In this section I will go through how I implemented the submarine using QML. For handling the physics, I used Box2D.

The Submarine

The submarine is a QML Item element, designed as follows:

Item {
    id: submarine
    z: 1

    property point initialPosition: Qt.point(0,0)
    property bool isHit: false
    property int terminalVelocityIndex: 100
    property int resetVerticalSpeed: 500

    /* Maximum depth the submarine can dive when ballast tank is full */
    property real maximumDepthOnFullTanks: (background.height * 0.6) / 2

    /* Engine properties */
    property point velocity
    property int maximumXVelocity: 5

    /* Wings property */
    property int wingsAngle
    property int initialWingsAngle: 0
    property int maxWingsAngle: 2
    property int minWingsAngle: -2

    function destroySubmarine() {
        isHit = true
        submarineImage.broken()
    }

    function resetSubmarine() {
        isHit = false
        submarineImage.reset()

        leftBallastTank.resetBallastTanks()
        rightBallastTank.resetBallastTanks()
        centralBallastTank.resetBallastTanks()

        x = initialPosition.x
        y = initialPosition.y
        velocity = Qt.point(0,0)
        wingsAngle = initialWingsAngle
    }

    function increaseHorizontalVelocity(amt) {
        if (submarine.velocity.x + amt <= submarine.maximumXVelocity) {
            submarine.velocity.x += amt
        }
    }

    function decreaseHorizontalVelocity(amt) {
        if (submarine.velocity.x - amt >= 0) {
            submarine.velocity.x -= amt
        }
    }

    function increaseWingsAngle(amt) {
        if (wingsAngle + amt <= maxWingsAngle) {
            wingsAngle += amt
        } else {
            wingsAngle = maxWingsAngle
        }
    }

    function decreaseWingsAngle(amt) {
        if (wingsAngle - amt >= minWingsAngle) {
            wingsAngle -= amt
        } else {
            wingsAngle = minWingsAngle
        }
    }

    function changeVerticalVelocity() {
        /*
         * Movement due to planes
         * Movement is affected only when the submarine is moving forward
         * When the submarine is on the surface, the planes cannot be used
         */
        if (submarineImage.y > 0) {
            submarine.velocity.y = (submarine.velocity.x) > 0 ? wingsAngle : 0
        } else {
            submarine.velocity.y = 0
        }

        /* Movement due to Ballast tanks */
        if (wingsAngle == 0 || submarine.velocity.x == 0) {
            var yPosition = submarineImage.currentWaterLevel / submarineImage.totalWaterLevel * submarine.maximumDepthOnFullTanks
            speed.duration = submarine.terminalVelocityIndex * Math.abs(submarineImage.y - yPosition) // terminal velocity
            submarineImage.y = yPosition
        }
    }

    BallastTank {
        id: leftBallastTank
        initialWaterLevel: 0
        maxWaterLevel: 500
    }

    BallastTank {
        id: rightBallastTank
        initialWaterLevel: 0
        maxWaterLevel: 500
    }

    BallastTank {
        id: centralBallastTank
        initialWaterLevel: 0
        maxWaterLevel: 500
    }

    Image {
        id: submarineImage
        source: url + "submarine.png"

        property int currentWaterLevel: bar.level < 7 ? centralBallastTank.waterLevel : leftBallastTank.waterLevel + centralBallastTank.waterLevel + rightBallastTank.waterLevel
        property int totalWaterLevel: bar.level < 7 ? centralBallastTank.maxWaterLevel : leftBallastTank.maxWaterLevel + centralBallastTank.maxWaterLevel + rightBallastTank.maxWaterLevel

        width: background.width / 9
        height: background.height / 9

        function broken() {
            source = url + "submarine-broken.png"
        }

        function reset() {
            source = url + "submarine.png"
            speed.duration = submarine.resetVerticalSpeed
            x = submarine.initialPosition.x
            y = submarine.initialPosition.y
        }

        Behavior on y {
            NumberAnimation {
                id: speed
                duration: 500
            }
        }

        onXChanged: {
            if (submarineImage.x >= background.width) {
                Activity.finishLevel(true)
            }
        }
    }

    Body {
        id: submarineBody
        target: submarineImage
        bodyType: Body.Dynamic
        fixedRotation: true
        linearDamping: 0
        linearVelocity: submarine.isHit ? Qt.point(0,0) : submarine.velocity

        fixtures: Box {
            id: submarineFixer
            width: submarineImage.width
            height: submarineImage.height
            categories: items.submarineCategory
            collidesWith: Fixture.All
            density: 1
            friction: 0
            restitution: 0

            onBeginContact: {
                var collidedObject = other.getBody().target

                if (collidedObject == whale) {
                    whale.hit()
                }

                if (collidedObject == crown) {
                    crown.captureCrown()
                } else {
                    Activity.finishLevel(false)
                }
            }
        }
    }

    Timer {
        id: updateVerticalVelocity
        interval: 50
        running: true
        repeat: true

        onTriggered: submarine.changeVerticalVelocity()
    }
}

The Item is a parent object that holds the different components of the submarine (the Image, the BallastTanks and the Box2D Body). It also contains the functions and variables that are global to the submarine.

The Engine

The engine is a very straightforward implementation via the linearVelocity component of the Box2D element. We have two variables global to the submarine for handling the engine component, defined as follows:

property point velocity
property int maximumXVelocity: 5

which are pretty much self-explanatory: velocity holds the current velocity of the submarine, both horizontal and vertical, and maximumXVelocity holds the maximum horizontal speed the submarine can achieve.

For increasing or decreasing the velocity of the submarine, we have two functions global to the submarine, as follows:

function increaseHorizontalVelocity(amt) {
    if (submarine.velocity.x + amt <= submarine.maximumXVelocity) {
        submarine.velocity.x += amt
    }
}

function decreaseHorizontalVelocity(amt) {
    if (submarine.velocity.x - amt >= 0) {
        submarine.velocity.x -= amt
    }
}

which take the amount by which the velocity.x component needs to be increased or decreased, check that the result stays within range, and apply the change if so.

The actual applying of the velocity is very straightforward, which takes place in the Body component of the submarine as follows:

Body {
    ...
    linearVelocity: submarine.isHit ? Qt.point(0,0) : submarine.velocity
    ...

The submarine.isHit property, as the name suggests, holds whether the submarine has been hit by any object (except the pickups). If so, the velocity is reset to (0,0).

Thus, for increasing or decreasing the engine power, we just have to call one of the two functions anywhere from the code:

submarine.increaseHorizontalVelocity(1); /* For increasing H velocity */
submarine.decreaseHorizontalVelocity(1); /* For decreasing H velocity */

The Ballast Tanks

The ballast tanks are implemented separately in BallastTank.qml, since they will be instantiated more than once. It looks like the following:

Item {
    property int initialWaterLevel
    property int waterLevel: 0
    property int maxWaterLevel
    property int waterRate: 10
    property bool waterFilling: false
    property bool waterFlushing: false

    function fillBallastTanks() {
        waterFilling = !waterFilling

        if (waterFilling) {
            fillBallastTanks.start()
        } else {
            fillBallastTanks.stop()
        }
    }

    function flushBallastTanks() {
        waterFlushing = !waterFlushing

        if (waterFlushing) {
            flushBallastTanks.start()
        } else {
            flushBallastTanks.stop()
        }
    }

    function updateWaterLevel(isInflow) {
        if (isInflow) {
            if (waterLevel < maxWaterLevel) {
                waterLevel += waterRate
            }
        } else {
            if (waterLevel > 0) {
                waterLevel -= waterRate
            }
        }

        if (waterLevel > maxWaterLevel) {
            waterLevel = maxWaterLevel
        }

        if (waterLevel < 0) {
            waterLevel = 0
        }
    }

    function resetBallastTanks() {
        waterFilling = false
        waterFlushing = false
        waterLevel = initialWaterLevel

        fillBallastTanks.stop()
        flushBallastTanks.stop()
    }

    Timer {
        id: fillBallastTanks
        interval: 500
        running: false
        repeat: true

        onTriggered: updateWaterLevel(true)
    }

    Timer {
        id: flushBallastTanks
        interval: 500
        running: false
        repeat: true

        onTriggered: updateWaterLevel(false)
    }
}

What these functions essentially do is:

  • fillBallastTanks: Fills the ballast tank up to maxWaterLevel. It toggles the flag waterFilling; when true, the timer fillBallastTanks is started, which increases the water level in the tank every 500 milliseconds.
  • flushBallastTanks: Flushes the ballast tank down to 0. It toggles the flag waterFlushing; when true, the timer flushBallastTanks is started, which decreases the water level in the tank every 500 milliseconds.
  • resetBallastTanks: Resets the water level in the ballast tank to its initial value.

In the Submarine Item, we just use three instances of the BallastTank object, for the left, right, and central ballast tanks, setting their initial and maximum water levels.

BallastTank {
    id: leftBallastTank
    initialWaterLevel: 0
    maxWaterLevel: 500
}

BallastTank {
    id: rightBallastTank
    initialWaterLevel: 0
    maxWaterLevel: 500
}

BallastTank {
    id: centralBallastTank
    initialWaterLevel: 0
    maxWaterLevel: 500
}

For filling up or flushing the ballast tanks (centralBallastTank in this case), we just have to call either of the following two functions:

centralBallastTank.fillBallastTanks() /* For filling */
centralBallastTank.flushBallastTanks() /* For flushing */

I will discuss how the depth is maintained using the ballast tanks in the next section.

The Diving Planes

The diving planes are used to control the depth of the submarine once it is moving underwater, keeping in mind that they need to be effectively integrated with the ballast tanks. This is implemented in the changeVerticalVelocity() function, discussed as follows:

/*
 * Movement due to planes
 * Movement is affected only when the submarine is moving forward
 * When the submarine is on the surface, the planes cannot be used
 */
if (submarineImage.y > 0) {
    submarine.velocity.y = (submarine.velocity.x) > 0 ? wingsAngle : 0
} else {
    submarine.velocity.y = 0
}

However, under the following conditions:

  • the angle of the planes is reduced to 0
  • the horizontal velocity of the submarine is 0,

the ballast tanks take over. This is implemented as:

/* Movement due to Ballast tanks */
if (wingsAngle == 0 || submarine.velocity.x == 0) {
    var yPosition = submarineImage.currentWaterLevel / submarineImage.totalWaterLevel * submarine.maximumDepthOnFullTanks
    speed.duration = submarine.terminalVelocityIndex * Math.abs(submarineImage.y - yPosition) // terminal velocity
    submarineImage.y = yPosition
}

yPosition is computed from the percentage of the tank filled with water, which determines the depth to which the submarine dives. speed.duration is the duration of the transition animation; it depends directly on the distance the submarine has to cover along the Y axis, to avoid a steep rise or fall of the submarine.

For increasing or decreasing the angle of the diving planes, we just need to call either of the following two functions:

submarine.increaseWingsAngle(1) /* For increasing */
submarine.decreaseWingsAngle(1) /* For decreasing */

That’s it for now! The two major goals to be completed next are the rotation of the submarine (in case more than one tank is used and they are unequally filled) and the UI for controlling the submarine. I will provide an update once they are completed.

Categories: FLOSS Project Planets

myDropWizard.com: Drupal 6 security update for Search 404

Planet Drupal - Wed, 2017-06-21 16:35

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Moderately Critical security release for the Search 404 module to fix a Cross Site Scripting (XSS) vulnerability.

From the security advisory for Drupal 7:

The Search 404 module enables you to redirect 404 pages to a search page on the site for the keywords in the url that was not found.

The module did not filter administrator-provided text before displaying it to the user on the 404 page creating a Cross Site Scripting (XSS) vulnerability.

This vulnerability is mitigated by the fact that an attacker must have a role with the permission "administer search".

Here you can download the Drupal 6 patch.

If you have a Drupal 6 site using the Search 404 module, we recommend you update immediately.

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

Categories: FLOSS Project Planets

Friday Free Software Directory IRC meetup: June 23rd starting at 12:00 p.m. EDT/16:00 UTC

FSF Blogs - Wed, 2017-06-21 15:45

Participate in supporting the Directory by adding new entries and updating existing ones. We will be on IRC in the #fsf channel on irc.freenode.org.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic category and descriptions, to providing detailed info about version control, IRC channels, documentation, and licensing info that has been carefully checked by FSF staff and trained volunteers.

While the Directory has been and continues to be a great resource to the world for over a decade now, it has the potential of being a resource of even greater value. But it needs your help!

This week we are kicking off summer school by focusing on educational software. While summer is just kicking off with fun and freedom for most students, others are buckling down for extra credits to get a jump start on the next school year. We want to help them have all the tools they need to succeed by updating current entries on educational software, as well as adding new entries to the category.

If you are eager to help and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today! There are also weekly Directory Meeting pages that everyone is welcome to contribute to before, during, and after each meeting.

Categories: FLOSS Project Planets

Free Software Directory meeting recap for June 16th, 2017

FSF Blogs - Wed, 2017-06-21 15:35

Every week free software activists from around the world come together in #fsf on irc.freenode.org to help improve the Free Software Directory. This recaps the work we accomplished at the Friday, June 16th, 2017 meeting.

This past week we remembered the 9th U.S. Circuit Court of Appeals stating that "space-shifting" was acceptable. As a result, the theme of the Directory meeting this week was audio/video manipulation software, and we spent some time updating those programs. People also popped in to discuss the post about Mastodon published this past week by geographic. Part of this included talking about hosts for this decentralized, federated microblogging platform, as well as configuration issues.

If you would like to help update the Directory, meet with us every Friday in #fsf on irc.freenode.org from 12 p.m. to 3 p.m. EDT (16:00 to 19:00 UTC).

Categories: FLOSS Project Planets

GSoC 2017 – A New Journey

Planet KDE - Wed, 2017-06-21 14:05

It has been a long time since I posted on my blog and, frankly, I missed it. I've been busy with school: courses, tons of homework, projects and presentations.

Since I had a great experience with GCompris and KDE in general last year, I decided to apply to this year's GSoC as well; only this time, I chose another project from KDE: Minuet.

Minuet is part of KDE-edu and its goal is to help teachers and students, both novice and experienced, teach and learn music and exercise their music skills. It is primarily focused on ear-training exercises; other areas will soon be available.

Minuet includes a virtual piano keyboard, displayed at the bottom of the screen, on which users can visualize the exercises. Using a piano keyboard is a good starting point for anyone who wants to learn the basics of musical theory: intervals, chords, scales etc. Minuet is currently based on the piano keyboard for all its ear training exercises. While this is a great feature, some may find it not quite suitable to their musical instrument.

 

My project aims to deliver to the user a framework which will support the implementation of multiple instrument views as Minuet plugins. Furthermore, apart from the piano keyboard, I will implement another instrument for playing the exercise questions and user’s answers.

This mechanism should allow new instruments to be integrated as Minuet plugins. After downloading the preferred instrument plugin, the user will be able to switch between instruments, enhancing their musical knowledge by training their skills on that particular instrument.

At the end of summer, I intend to have changed the current architecture to a multiple-instrument visualization framework and to have refactored the piano keyboard view as a separate plugin. I also intend to have implemented a plugin for at least one new instrument: a guitar.

A mockup of the new guitar is shown below:

I hope it will be a great summer for me, my mentor and the users of Minuet, to whom I want to offer a better experience through my work.


Categories: FLOSS Project Planets

Continuum Analytics News: It’s Getting Hot, Hot, Hot: Four Industries Turning Up The Data Science Heat

Planet Python - Wed, 2017-06-21 13:56
Company Blog | Wednesday, June 21, 2017 | Christine Doig, Sr. Data Scientist, Product Manager

Summer 2017 has officially begun. As temperatures continue to rise, so does the use of data science across dozens of industries. In fact, IBM predicts the demand for data scientists will increase by 28 percent in just three short years, and our own survey recently revealed that 96 percent of company executives conclude data science is critical to business success. While it’s clear that health care providers, financial institutions and retail organizations are harnessing the growing power of data science, it’s time for more industries to turn up the data science heat. We take a peek below at some of the up and comers.  
 
Aviation and Aerospace
As data science continues to reach for the sky, it’s only fitting that the aviation industry is also on track to leverage this revolutionary technology. Airlines and passengers generate an abundance of data everyday, but are not currently harnessing the full potential of this information. Through advanced analytics and artificial intelligence driven by data science, fuel consumption, flight routes and air congestion could be optimized to improve the overall flight experience. What’s more, technology fueled by data science could help aviation proactively avoid some of the delays and inefficiencies that burden both staff and passengers—airlines just need to take a chance and fly with it! 

The Data Science Revolution Transforming #Aviation via @forbes http://owy.mn/2rnTndz

— Oliver Wyman (@oliverwyman) June 17, 2017

Cybersecurity   
In addition to aviation, cybersecurity has become an increasingly hot topic during the past few years. The global cost of handling cyberattacks is expected to rise from $400 billion in 2015 to $2.1 trillion by 2019, but implementing technology driven by data science can help secure business data and reduce these attacks. By focusing on the abnormalities, using all available data and automating whenever possible, companies will have a better chance at standing up to threatening attacks. Not to mention, artificial intelligence software is already being used to defend cyber infrastructure. 
  
Construction
While improving data security is essential, the construction industry is another space that should take advantage of data science tools to improve business outcomes. As an industry that has long resisted change, some companies are now turning to data science technology to manage large teams, improve efficiency in the building process and reduce project delivery time, ultimately increasing profit margins. By embracing data analytics and these new technologies, the construction industry will also have more room to successfully innovate. 
 
Ecology
From aviation to cybersecurity to construction, it’s clear that product-focused industries are on track to leverage data science. But what about the more natural side of things? One example suggests ecologists can learn more about ocean ecosystems through the use of technology driven by data science. Through coding and the use of other data science tools, these environmental scientists found they could conduct better, more effective oceanic research in significantly less time. Our hope is for other scientists to continue these methods and unearth more pivotal information about our planet. 
 
So there you have it: four industries that are beginning to harness the power of data science to help transform business processes, drive innovation and ultimately change the world. Who will the next four be?

Categories: FLOSS Project Planets

Facebook Events in KOrganizer

Planet KDE - Wed, 2017-06-21 13:44

Sounds like déjà vu? You are right! We used to have Facebook Event sync in KOrganizer back in KDE 4 days thanks to Martin Klapetek. The Facebook Akonadi resource, unfortunately, did not survive through Facebook API changes and our switch to KF5/Qt5.

I’m using a Facebook event sync app on my Android phone, which is very convenient as I get to see all events I am attending, interested in or just invited to directly in my phone’s calendar, and I can schedule my other events with those in mind. Now I finally grew tired of having to check my phone or Facebook whenever I wanted to schedule an event through KOrganizer, and I spent a few evenings writing a brand new Facebook Event resource.

Inspired by the Android app the new resource creates several calendars – for events you are attending, events you are interested in, events you have declined and invitations you have not responded to yet. You can configure if you want to receive reminders for each of those.

Additionally, the resource fetches a list of all your friends’ birthdays (at least of those who have their birthday visible to their friends) and puts them into a Birthday calendar. You can also configure reminders for those separately.

The Facebook Sync resource will be available in the next KDE Applications feature release in August.

Categories: FLOSS Project Planets

PyCharm: PyCharm 2017.2 EAP 4

Planet Python - Wed, 2017-06-21 13:00

The fourth early access program (EAP) version of PyCharm 2017.2 is now available! Go to our website to download it now.

New in this version:

  • Docker Compose support on Windows. If you’re using Docker on Windows, please try it out. To set up a Compose project, go to your project interpreter, Add Remote, choose the ‘Docker Compose’ interpreter type and get started. If you need some help getting started, read our blog post on Docker Compose for Linux and Mac and ignore the warnings about Windows.
  • Support for Azure databases, and Amazon Redshift. To try this out, open the database pane, use the green ‘+’ to add a data source, and choose the one you’re interested in.
  • We’ve also fixed a lot of bugs related to Django support, Jupyter notebooks, and code inspections.
  • And more! See the release notes for details

Please let us know how you like it! Users who actively report about their experiences with the EAP can win prizes in our EAP competition. To participate: just report your findings on YouTrack, and help us improve PyCharm.

To get all EAP builds as soon as we publish them, set your update channel to EAP (go to Help | Check for Updates, click the ‘Updates’ link, and then select ‘Early Access Program’ in the dropdown). If you’d like to keep all your JetBrains tools updated, try JetBrains Toolbox!

-PyCharm Team
The Drive to Develop

Categories: FLOSS Project Planets

Enthought: Enthought Announces Canopy 2.1: A Major Milestone Release for the Python Analysis Environment and Package Distribution

Planet Python - Wed, 2017-06-21 12:30
Python 3 and multi-environment support, new state of the art package dependency solver, and over 450 packages now available free for all users

Enthought is pleased to announce the release of Canopy 2.1, a significant feature release that includes Python 3 and multi-environment support, a new state of the art package dependency solver, and access to over 450 pre-built and tested scientific and analytic Python packages completely free for all users. We highly recommend that all current Canopy users upgrade to this new release.

Ready to dive in? Download Canopy 2.1 here.

For those currently familiar with Canopy, in this blog we’ll review the major new features in this exciting milestone release, and for those of you looking for a tool to improve your workflow with Python, or perhaps new to Python from a language like MATLAB or R, we’ll take you through the key reasons that scientists, engineers, data scientists, and analysts use Canopy to enable their work in Python.

First, let’s talk about the latest and greatest in Canopy 2.1!
  1. Support for Python 3 user environments: Canopy can now be installed with a Python 3.5 user environment. Users can benefit from all the Canopy features already available for Python 2.7 (syntax checking, debugging, etc.) in the new Python 3 environments. Python 3.6 is also available (and will be the standard Python 3 in Canopy 2.2).
  2. All 450+ Python 2 and Python 3 packages are now completely free for all users: Technical support, full installers with all packages for offline or shared installation, and the premium analysis environment features (graphical debugger and variable browser and Data Import Tool) remain subscriber-exclusive benefits. See subscription options here to take advantage of those benefits.
  3. Built in, state of the art dependency solver (EDM or Enthought Deployment Manager): the new EDM back end (which replaces the previous enpkg) provides additional features for robust package compatibility. EDM integrates a specialized dependency solver which automatically ensures you have a consistent package set after installation, removal, or upgrade of any packages.
  4. Environment bundles, which allow users to easily share environments directly with co-workers, or across various deployment solutions (such as the Enthought Deployment Server, continuous integration processes like Travis-CI and Appveyor, cloud solutions like AWS or Google Compute Engine, or deployment tools like Ansible or Docker). EDM environment bundles not only allow the user to replicate the set of installed dependencies but also support persistence for constraint modifiers, the list of manually installed packages, and the runtime version and implementation.
  5. Multi-environment support: with the addition of Python 3 environments and the new EDM back end, Canopy now also supports managing multiple Python environments from the user interface. You can easily switch between Python 2.7 and 3.5, or between multiple 2.7 or 3.5 environments. This is especially useful for those migrating legacy code to Python 3, as it lets you test as you port while retaining access to historical snapshots or libraries that aren’t yet available in Python 3 (a minimal sketch of such migration-friendly code follows this list).
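
To make that migration workflow concrete, here is a minimal sketch, assuming nothing about Canopy itself, of the kind of module you might run side by side in a 2.7 and a 3.5 environment while porting; the mean() helper is a made-up example:

# Illustrative only: a module written to behave identically under
# Python 2.7 and 3.5, the kind of code side-by-side environments
# make easy to verify during a migration.
from __future__ import division, print_function

import sys

def mean(values):
    """Arithmetic mean; true division behaves the same on 2 and 3."""
    values = list(values)
    return sum(values) / len(values)

if __name__ == "__main__":
    print("Running under Python", sys.version_info[:2])
    print("mean([1, 2, 3, 4]) =", mean([1, 2, 3, 4]))
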
Why Canopy is the Python platform of choice for scientists and engineers

Since 2001, Enthought has focused on making the scientific Python stack accessible and easy to use for both enterprises and individuals. For example, Enthought released the first scientific Python distribution in 2004, added robust and corporate support for NumPy on 64-bit Windows in 2011, and released Canopy 1.0 in 2013.

Since then, with its MATLAB-like experience, Canopy has enabled countless engineers, scientists, and analysts to perform sophisticated analysis, build models, and create cutting-edge data science algorithms. Canopy’s all-in-one package distribution and analysis environment for Python has also been widely adopted by organizations that want to provide a single, unified platform usable by everyone from data analysts to software engineers.

Here are four of the top reasons that people choose Canopy as their tool for enabling data analysis, data modelling, and data visualization with Python:

1. Canopy provides a complete, self-contained installer that gets you up and running with Python and a library of scientific and analytic tools – fast

Canopy is designed to provide a fast installation experience: the installer sets up not only the Canopy analysis environment but also the Python version of your choice (e.g. 2.7 or 3.5) and a core set of curated Python packages. Installation can run entirely in your home directory and does not require administrative privileges.

In just minutes, you’ll have a fully working Python environment with the primary tools for doing your work pre-installed: Jupyter, Matplotlib, NumPy and SciPy (optimized with the latest MKL from Intel), Scikit-learn, and Pandas, plus instant access to over 450 additional pre-built and tested scientific and analytic packages to customize your toolset.
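
As a quick illustration of what that out-of-the-box toolset enables, here is a short, self-contained session that uses only the bundled packages; the data and column names are invented for the example:

# A minimal first session in a fresh environment, using only the
# pre-installed packages; the generated data is purely illustrative.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Build a small DataFrame of noisy sine-wave samples.
rng = np.random.RandomState(42)
df = pd.DataFrame({"x": np.linspace(0, 10, 100)})
df["y"] = np.sin(df["x"]) + rng.normal(scale=0.1, size=len(df))

print(df.describe())                            # quick summary statistics
df.plot(x="x", y="y", title="Noisy sine wave")  # Matplotlib via Pandas
plt.show()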

No command line, no complex multi-stage setups! (although if you do prefer a flat, standalone command line interface for package and environment management, we offer that too via the EDM tool)

2. Access to a curated, quality assured set of packages managed through Canopy’s intuitive graphical package manager

The scientific Python ecosystem is gigantic and vibrant. Enthought is continuously updating its Enthought Python Distribution package set to provide the most recent “Enthought approved” versions of packages, with rigorous testing and quality assessment by our experts in the Python packaging ecosystem before release.

Our users can’t afford to take chances with the stability of their software and applications, and using Canopy as their gateway to the Python ecosystem helps take the risk out of the “wild west” of open source software. With more than 450 tested, pre-built and approved packages available in the Enthought Python Distribution, users can easily access both the most current stable version as well as historical versions of the libraries in the scientific Python stack.

Consistent with our focus on ease-of-use, Canopy provides a graphical package manager to easily search, install and remove packages from the user environment. You can also easily roll back to earlier versions of a package. The underlying EDM back end takes care of complex dependency management when installing, updating, and removing packages to ensure nothing breaks in the process.

3. Canopy is designed to be extensible for the enterprise

Canopy not only provides a consistent Python toolset for all 3 major operating systems and support for a wide variety of use cases (from data science to data analysis to modelling and even application development), but it is also extensible with other tools.

Canopy can easily be integrated with other software tools in use at enterprises, such as Excel via PyXLL or LabVIEW from National Instruments via the Python Integration Toolkit for LabVIEW. The built-in Canopy Data Import Tool helps you automate your data ingestion steps and automatically import tabular data files such as CSVs into Pandas DataFrames.
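
A hand-written Pandas equivalent of that kind of CSV import looks roughly like the sketch below; the file name and columns are hypothetical:

# Hypothetical example of the kind of import the Data Import Tool
# automates; "measurements.csv" and its columns are invented here.
import pandas as pd

df = pd.read_csv(
    "measurements.csv",
    parse_dates=["timestamp"],    # parse the timestamp column as datetimes
    na_values=["NA", "missing"],  # treat these strings as missing values
)
print(df.dtypes)
print(df.head())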

But it doesn’t stop there. If an enterprise has Python embedded in another software application, Canopy can be directly connected to that application to provide coding and debugging capabilities. Canopy itself can even be customized or embedded to provide a sophisticated Python interface for your applications. Contact us to learn more about these options.

Finally, in addition to accessing the libraries in the Enthought Python Distribution from Canopy, users can use the same tools to share and deploy their own internal, private packages by adding the Enthought Deployment Server. The Deployment Server also allows enterprises to host a private, onsite copy of the full Enthought Python Distribution on their own approved servers, compliant with their existing security protocols.


4. Canopy’s straightforward analysis environment, specifically tailored to the needs and workflow of scientists, analysts, and engineers

Three integrated features of the Canopy analysis environment combine to create a powerful, yet streamlined platform: (1) a code editor, (2) an interactive graphical debugger with variable browser, and (3) an IPython window.

  • Canopy’s code editor comes with everything required to write analysis code, but without the burden of heavyweight development environments like PyCharm or Microsoft Visual Studio (although, if needed, other IDEs can be configured to use the Canopy Python environment). With syntax highlighting, Python code auto-completion, and error checking, users can quickly write, navigate, and execute Python code.
  • Canopy’s interactive graphical debugger with variable browser helps you quickly find and fix code errors, understand and investigate code and data, and write new code more quickly.
  • The integrated IPython window lets you quickly test code, experiment with ideas and see the results of code run directly from the editor. Canopy also includes pre-configured Jupyter Notebook access.

Finally, having package documentation at your fingertips in Canopy is a great aid to faster coding. Canopy not only integrates online documentation and examples for many of the most-used packages for data visualization, numerical analysis, machine learning, and more, but also lets you extract and execute code from that documentation, so you can start from working example code quickly.

We’re very excited for this major release and all of the new capabilities that it will enable for both individuals and enterprises, and encourage you to download or update to the new Canopy 2.1 today.

Have feedback on your experience with Canopy?

We’d love to hear about it! Contact the product development team at canopy.support@enthought.com.

Additional Resources:

Blog: New Year, New Enthought Products (Jan 2017)

Blog: Enthought Presents the Canopy Platform at the 2017 American Institute of Chemical Engineers (AIChE) Spring Meeting

Product pages:

The post Enthought Announces Canopy 2.1: A Major Milestone Release for the Python Analysis Environment and Package Distribution appeared first on Enthought Blog.

Categories: FLOSS Project Planets

Reproducible builds folks: Reproducible Builds: week 112 in Stretch cycle

Planet Debian - Wed, 2017-06-21 12:27

Here's what happened in the Reproducible Builds effort between Sunday June 11 and Saturday June 17 2017:

Upcoming events

Upstream patches and bugs filed

Reviews of unreproducible packages

1 package review has been added, 19 have been updated and 2 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (1)
  • Edmund Grimley Evans (1)
diffoscope development

tests.reproducible-builds.org

As you might have noticed, Debian stretch was released last week. Since then, Mattia and Holger renamed our testing suite to stretch and added a buster suite so that we keep our historic results for stretch visible and can continue our development work as usual. In this sense, happy hacking on buster; may it become the best Debian release ever and hopefully the first reproducible one!

  • Vagrant Cascadian:
  • Valerie Young: Add highlighting in navigation for the new nodes health pages.
  • Mattia Rizzolo:
    • Do not dump database ACL in the backups.
    • Deduplicate SSLCertificateFile directive into the common-directives-ssl macro
    • Apache: t.r-b.o: redirect /testing/ to /stretch/
    • db: s/testing/stretch/g
    • Start adding code to test buster...
  • Holger Levsen:
    • Update README.infrastructure to explain who has root access where.
    • reproducible_nodes_info.sh: correctly recognize zero builds per day.
    • Add build nodes health overview page, then split it in three: health overview, daily munin graphs and weekly munin graphs.
    • reproducible_worker.sh: improve handling of systemctl timeouts.
    • reproducible_build_service: sleep less and thus restart failed workers sooner.
    • Replace ftp.(de|uk|us).debian.org with deb.debian.org everywhere.
    • Performance page: also show local problems with _build_service.sh (which are autofixed after a maximum of 133.7 minutes).
    • Rename nodes_info job to html_nodes_info.
    • Add new node health check jobs, split off from maintenance jobs, run every 15 minutes.
      • Add two new checks: one that the node’s clock is not in the future (a node reporting 2019 is incorrect at the moment, and we sometimes got that), and one that /tmp is writeable (a read-only /tmp sometimes happens on borked armhf nodes); see the sketch after this list.
    • Add jobs for testing buster.
    • s/testing/stretch/g in all the code.
    • Finish the code to deal with buster.
    • Teach jessie and Ubuntu 16.04 how to debootstrap buster.
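
The logic behind those two new health checks (the sketch referenced in the list above) is simple enough to show; this is an illustrative Python rendering, since the real jobs are shell scripts, and the 2017 year threshold is a made-up stand-in for "the clock must not be in the future":

# Illustrative Python rendering of the two node health checks; the
# real jobs are shell scripts, and 2017 here is a made-up threshold.
import datetime
import tempfile

def clock_is_sane(current_year=2017):
    """A node whose clock reports e.g. 2019 is considered broken."""
    return datetime.date.today().year <= current_year

def tmp_is_writeable():
    """Actually create a file in /tmp rather than trusting permissions."""
    try:
        with tempfile.TemporaryFile(dir="/tmp"):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("clock ok:", clock_is_sane())
    print("/tmp writeable:", tmp_is_writeable())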

Axel Beckert is currently in the process of setting up eight LeMaker HiKey960 boards. These boards were sponsored by Hewlett Packard Enterprise and will be hosted by the SOSETH students association at ETH Zurich. Thanks to everyone involved here and also thanks to Martin Michlmayr and Steve Geary who initiated getting these boards to us.

Misc.

This week's edition was written by Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Categories: FLOSS Project Planets