FLOSS Project Planets

Mike Hommey: Announcing git-cinnabar 0.4.0 release candidate

Planet Debian - Mon, 2016-11-28 19:18

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows cloning, pulling and pushing from/to remote mercurial repositories using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.4.0b3?
  • Updated git to 2.10.2 for cinnabar-helper.
  • Added a new git cinnabar download command to download a helper on platforms where one is available.
  • Fixed some corner cases with pack windows in the helper. This prevented cloning mozilla-central with the helper.
  • Fixed bundle2 support that broke cloning from a mercurial 4.0 server in some cases.
  • Fixed some corner cases involving empty files. This prevented cloning Mozilla’s stylo incubator repository.
  • Fixed some correctness issues in file parenting when pushing changesets pulled from one mercurial repository to another.
  • Various improvements to the rules to build the helper.
  • Experimental (and slow) support for pushing merges, with caveats. See issue #20 for details about the current status.

And since I realize I didn’t announce beta 3:

What’s new since 0.4.0b2?
  • Properly handle bundle2 errors, preventing git from believing a push happened when it didn’t. (0.3.x is unaffected)
Categories: FLOSS Project Planets

Kdenlive’s first bug squashing day

Planet KDE - Mon, 2016-11-28 18:59

Kdenlive 16.12 will be released very soon, and we are trying to fix as many issues as possible. This is why we are organizing a Bug Squashing Day this Friday, December 2nd 2016, between 9 am and 5 pm (Central European Time – CET).

Kdenlive needs you

There are several ways you can help us improve this release, depending on your skills or interests. During the bug squashing day, Kdenlive developers will be reachable on IRC at freenode.net, channel #kdenlive to answer your questions. A collaborative notepad has also been created to coordinate the efforts.

If you have some interest / knowledge in coding:
You can download Kdenlive’s source code and find instructions on our wiki. We will also be available on Friday on IRC to help you set up your development environment. You can then select an ‘easy bug’ from the notepad list and look at the code to try to fix it. Feel free to ask your questions on IRC; the developers will guide you through the process, so that you can get familiar with the parts of the code you will be looking at.

If you are a user and encounter a bug:
You can help us by testing the Kdenlive 16.12 RC version. Our easy-to-install AppImage and snap packages will be updated on the 1st of December with the latest code (Ubuntu users can also use our PPA). This allows you to install the latest version without messing with your system. You can then check whether a bug is still there in the latest version, or let us know if it is fixed.

So feel free to join us this Friday; this is your chance to help the world of free software video editing!

For the Kdenlive team,
Jean-Baptiste Mardelle


Aten Design Group: Restricting Access to Drupal 8 Controllers

Planet Drupal - Mon, 2016-11-28 18:53

Controllers in Drupal 8 are the equivalent of hook_menu in Drupal 7. A controller lets you define a URL and what content or data should appear at that URL. If you’re like me, limiting access to your controllers is sometimes an afterthought. Limiting access is important because it defines who can and can’t see a page.

Controllers are defined in a YAML file called module_name.routing.yml. Access and permission rules are defined in module_name.routing.yml under requirements. Most of the code examples will be from a module_name.routing.yml file added at the top level of my_module.

Note: There is a lot of existing documentation on how to create controllers in Drupal 8, so I won’t focus on that here.

I’ve outlined some of the most useful approaches for limiting access below. You can jump straight to the most relevant section using the following links: limit by permission, limit by role, limit by one-off custom code, limit by custom access service.

Limit by permission

In this case, a permission from the Drupal permissions page is given. Permissions can be found at /admin/people/permissions. Finding the exact permission name can be tricky. Look for module.permissions.yml files in the module providing the permission.

my_module.dashboard:
  path: 'dashboard'
  defaults:
    _controller: '\Drupal\my_module\Controller\DashboardController::content'
    _title: 'Dashboard'
  requirements:
    _permission: 'access content'

Key YAML definition:

_permission: 'THE PERMISSION NAME'

Limit by role

You can also limit access by role. This would be useful in cases where users of a specific role will be the only ones needing access to your controller. You can define user roles at /admin/people/roles.

my_module.dashboard:
  path: 'dashboard'
  defaults:
    _controller: '\Drupal\my_module\Controller\DashboardController::content'
    _title: 'Dashboard'
  requirements:
    _role: 'administrator'

Key YAML definition:

_role: 'THE ROLE NAME'

You can specify multiple roles using "," for AND and "+" for OR logic.
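For instance, a route open to users who have either of two roles (the role names here are hypothetical) could be sketched like this:

```yaml
my_module.dashboard:
  path: 'dashboard'
  defaults:
    _controller: '\Drupal\my_module\Controller\DashboardController::content'
    _title: 'Dashboard'
  requirements:
    # '+' means OR: either administrator or editor may access.
    _role: 'administrator+editor'
```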

Limit by one-off custom code

In cases where you have custom access requirements, adding an access method to your controller might make sense. In this example, the page should not be viewed before a specified date.

my_module.dashboard:
  path: 'dashboard'
  defaults:
    _controller: '\Drupal\my_module\Controller\DashboardController::content'
    _title: 'Dashboard'
  requirements:
    _custom_access: '\Drupal\my_module\Controller\DashboardController::access'

Key YAML definition:

_custom_access: '\Drupal\my_module\Controller\DashboardController::access'

The access method in my controller would look like:

<?php

namespace Drupal\my_module\Controller;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Controller\ControllerBase;

/**
 * Defines the Dashboard controller.
 */
class DashboardController extends ControllerBase {

  /**
   * Returns content for this controller.
   */
  public function content() {
    $build = [];
    return $build;
  }

  /**
   * Checks access for this controller.
   */
  public function access() {
    // Don't allow access before Friday, November 25, 2016.
    $today = date("Y-m-d H:i:s");
    $date = "2016-11-25 00:00:00";
    if ($today < $date) {
      // Return a 403 Access Denied page.
      return AccessResult::forbidden();
    }
    return AccessResult::allowed();
  }

}

Limit by custom access service

This is similar to having an access method in your controller, but allows the code to be reused across many controllers. This is ideal when you are doing the same access check across many controllers.

my_module.dashboard: path: 'dashboard' defaults: _controller: '\Drupal\my_module\Controller\DashboardController::content' _title: 'Dashboard' requirements: _custom_access_check: 'TRUE'

Key YAML definition:

_custom_access_check: 'TRUE'

Providing the _custom_access_check service requires creating two files in my_module.

my_module/my_module.services.yml (defines the Access service and where to find our Access class)

services:
  my_module.custom_access_check:
    class: Drupal\my_module\Access\CustomAccessCheck
    arguments: ['@current_user']
    tags:
      - { name: access_check, applies_to: _custom_access_check }


my_module/src/Access/CustomAccessCheck.php (the Access class itself, placed per the namespace below)

<?php

namespace Drupal\my_module\Access;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Routing\Access\AccessInterface;
use Drupal\Core\Session\AccountInterface;

/**
 * Class CustomAccessCheck.
 *
 * @package Drupal\my_module\Access
 */
class CustomAccessCheck implements AccessInterface {

  /**
   * A custom access check.
   *
   * @param \Drupal\Core\Session\AccountInterface $account
   *   Run access checks for the logged in user.
   */
  public function access(AccountInterface $account) {
    // The user has a profile field defining their favorite color.
    if ($account->field_color->hasField()
      && !$account->field_color->isEmpty()
      && $account->field_color->getString() === 'blue') {
      // If the user's favorite color is blue, give them access.
      return AccessResult::allowed();
    }
    return AccessResult::forbidden();
  }

}

While the above covers some of the most useful ways to restrict access to a controller, there are additional options. Drupal.org has a couple of good resources including Structure of Routes and Access Checking on Routes.


Reuven Lerner: The (lack of a) case against Python 3

Planet Python - Mon, 2016-11-28 17:54

A few days ago, well-known author and developer Zed Shaw wrote a blog post, “The Case Against Python 3.” I have a huge amount of respect for Zed’s work, and his book (Learn Python the Hard Way) takes an approach similar to mine — so much so that I often tell people who are about to take my course to read it in preparation, and tell people who want to practice more after finishing my course to read it afterwards.

It was thus disappointing for me to see Zed’s post about Python 3, with which I disagree.

Let’s make it clear: About 90% of my work is as a Python trainer at various large companies; my classes range from “Python for non-programmers” and “Intro Python” to “Data science and machine learning in Python,” with a correspondingly wide range of backgrounds. I would estimate that at least 95% of the people I teach are using Python 2 in their work.

In my own development work, I switch back and forth between Python 2 and 3, depending on whether it’s for a client, for myself, and what I plan to do with it.

So I’m far from a die-hard “Python 3 or bust” person. I recognize that there are reasons to use either 2 or 3.  And I do think that if there’s a major issue in the Python world today, it’s in the world of 2 vs. 3.

But there’s a difference between recognizing a problem, and saying that Python 3 is a waste of time — or, as Zed is saying, that it’s a mistake to teach Python 3 to new developers today.  Moreover, I think that the reasons he gives aren’t very compelling, either for newcomers to programming in general, or to experienced programmers moving to Python.

Zed’s argument seems to boil down to:

  • Implementing Unicode in Python 3 has made things harder, and
  • The fact that you cannot run Python 2 programs in the Python 3 environment, but need to translate them semi-automatically with a combination of 2to3 and manual intervention is crazy and broken.

I think that the first is a bogus argument, and the second is overstating the issues by a lot.

As for Unicode: This was painful. It was going to be painful no matter what.  Maybe the designers got some things wrong, but on the whole, Unicode works well (I think) in Python 3.

In my experience, 90% of programmers don’t need to think about Unicode, because so many programmers use ASCII in their work.  For them, Python 3 works just fine, no better (and no worse) than Python 2 on this front.

For people who do need Unicode, Python 3 isn’t perfect, but it’s far, far better than Python 2. And given that some huge proportion of the world doesn’t speak English, the notion that a modern language won’t natively support Unicode strings is just nonsense.
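A minimal illustration of what native Unicode support means in Python 3 (my own sketch, not from Zed's post):

```python
# In Python 3, str is Unicode by default; bytes are a separate type.
s = "héllo"                  # a Unicode string
b = s.encode("utf-8")        # explicit conversion to bytes

print(type(s).__name__)      # str
print(type(b).__name__)      # bytes
print(len(s))                # 5 characters
print(len(b))                # 6 bytes ("é" takes two bytes in UTF-8)
```

In Python 2, the same literal would be a bytestring unless you prefixed it with `u`, which is exactly the ambiguity Python 3 removed.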

This does mean that code needs to be rewritten, and that people need to think more before using strings that contain Unicode.  Yes, those are problems.  And Zed points out some issues with the implementation that can be painful for people.

But again, the population that will be affected is the 10% who deal with Unicode.  That generally doesn’t include new developers — and if it does, everything is hard for them.  So the notion that Unicode problems make Python 3 impossible to use is just silly.  And the notion that Python can simply ignore Unicode needs, or treat non-English characters as a second thought, is laughable in the modern world.

The decision not to allow Python 2 programs to run in the Python 3 VM might look foolish in hindsight.  But if the migration from Python 2 to 3 is slow now, imagine what would have happened if companies never needed to migrate at all.  Heck, that might still happen come 2020, when large companies don’t migrate.  I actually believe that large companies won’t ever translate their Python 2 code into Python 3.  It’s cheaper and easier for them to pay people to keep maintaining Python 2 code than to move mission-critical code to a new platform.  So new stuff will be in Python 3, and old stuff will be in Python 2.

I’m not a language designer, and I’m not sure how hard it would have been to allow both 2 and 3 to run on the same VM. I’m guessing that it would have been quite hard, if only because doing so would have saved a great deal of pain and angst among Python developers — and I do think that the Python developers have gone out of their way to make the transition easier.

Let’s consider who this lack of v2 backward compatibility affects, and what a compatible VM might have meant to them:

  • For new developers using Python 3, it doesn’t matter.
  • For small (and individual) shops that have some software in Python 2 and want to move to 3, this is frustrating, but it’s doable to switch, albeit incrementally.  This switch wouldn’t have been necessary if the VM were multi-version capable.
  • For big shops, they won’t switch no matter what. They are fully invested in Python 2, and it’s going to be very hard to convince them to migrate their code — in 2016, in 2020, and in 2030.

(PS: I sense a business opportunity for consultants who will offer Python 2 maintenance support contracts starting in 2020.)

So the only losers here are legacy developers, who will need to switch in the coming three years.  That doesn’t sound so catastrophic to me, especially given how many new developers are learning Python 3, the growing library compatibility with 3, and the fact that 3 increasingly has features that people want. With libraries such as six, making your code run in both 2 and 3 isn’t so terrible; it’s not ideal, but it’s certainly possible.

One of Zed’s points strikes me as particularly silly: The lack of Python 3 adoption doesn’t mean that Python 3 is a failure.  It means that Python users have entrenched business interests, and would rather stick with something they know than upgrade to something they don’t.  This is a natural way to do things, and you see it all the time in the computer industry.  (Case in point: Airlines and banks, which run on mainframes with software from the 1970s and 1980s.)

Zed does have some fair points: Strings are more muddled than I’d like (with too many options for formatting, especially in the next release), and some of the core libraries do need to be updated and/or documented better. And maybe some of those error messages you get when mixing Unicode and bytestrings could be improved.

But to say that the entire language is a failure because you get weird results when combining a (Unicode) string and a bytestring using str.format… in my experience, if someone is doing such things, then they’re no longer a newcomer, and know how to deal with some of these issues.
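For what it’s worth, the kind of surprise Zed describes is easy to reproduce (Python 3; my own example):

```python
# Mixing a bytestring into str.format() silently embeds the bytes repr
# instead of raising an error.
result = "value: {}".format(b"abc")
print(result)  # value: b'abc'
```

It is a weird result, but spotting `b'...'` leaking into output is exactly the kind of thing a non-newcomer learns to recognize.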

Python 3 isn’t a failure, but it’s not a massive success, either.  I believe that the reasons for that are (1) the Python community is too nice, and has allowed people to delay upgrading, and (2) no one ever updates anything unless they have a super-compelling reason to do so and they can’t afford not to.  There is a growing number of super-compelling reasons, but many companies are still skeptical of the advantages of upgrading. I know of people who have upgraded to Python 3 for its async capabilities.

Could the Python community have handled the migration better? Undoubtedly. Would it be nice to have more, and better, translation tools?  Yes.  Is Unicode a bottomless pit of pain, no matter how you slice it, with Python 3’s implementation being a pretty good one, given the necessary trade-offs? Yes.

At the same time, Python 3 is growing in acceptance and usage. Oodles of universities now teach Python 3 as an introductory language, which means that in the coming years, a new generation of developers will graduate and expect/want to use Python 3. People in all sorts of fields are using Python, and many of them are switching to Python 3.

The changes are happening: Slowly, perhaps, but they are happening. And it turns out that Python 3 is just as friendly to newbies as Python 2 was. Which doesn’t mean that it’s wart-free, of course — but as time goes on, the inertia keeping people from upgrading will wane.

I doubt that we’ll ever see everyone in the Python world using Python 3. But to dismiss Python 3 as a grave error, and to say that it’ll never catch on, is far too sweeping, and ignores trends on the ground.

The post The (lack of a) case against Python 3 appeared first on Lerner Consulting Blog.


Python Data: Getting the ‘next’ row of data in a pandas dataframe

Planet Python - Mon, 2016-11-28 16:15

I’m currently working with stock market trade data that is output from a backtesting engine (I’m working with backtrader currently).  The format of the ‘transactions’ data that is provided by the backtesting engine is shown below.

                               amount   price          value
date
2016-01-07 00:00:00+00:00   79.017119  195.33  -15434.413883
2016-09-07 00:00:00+00:00  -79.017119  218.84   17292.106354
2016-09-20 00:00:00+00:00   82.217609  214.41  -17628.277649
2016-11-16 00:00:00+00:00  -82.217609  217.56   17887.263119

The data provided gives four crucial pieces of information:

  • Date – The date of a transaction.
  • Amount – the number of shares purchased (positive number) or sold (negative number)  during the transaction.
  • Price – the price received or paid at the time of the sale.
  • Value – the cash value of the transaction.

Backtrader’s transactions dataframe contains two rows per transaction (the first is the ‘buy’, the second is the ‘sell’).  For example, in the data above, the first two rows (Jan 7 2016 and Sept 7 2016) are the ‘buy’ data and ‘sell’ data for one transaction. What I need to do with this data is transform it (using that term loosely) into one row of data per transaction to store into a database for use in another analysis.

I could leave it in its current form, but I prefer to store transactions in one row when dealing with market backtests.

There are a few ways to attack this particular problem.  You could iterate over the dataframe and manually pick each row. That would be pretty straightforward, but not necessarily the best way.

While looking around the web for some pointers, I stumbled across this answer that does exactly what I need to do.   I added the following code to my script and — voila — I have my transactions transformed from two rows per transaction to one row.

from itertools import tee, izip

trade_output = []

def next_row(iterable):
    a, b = tee(iterable)
    next(b, None)
    return izip(a, b)

for (buy_index, buy_data), (sell_index, sell_data) in next_row(transactions.iterrows()):
    if buy_data['amount'] > 0:
        trade_output.append((buy_index.date(), buy_data["amount"], buy_data['price'],
                             sell_index.date(), sell_data["amount"], sell_data['price']))

Note: In the above, I only want to build rows that start with a positive amount in the ‘amount’ column, because the amount in the first row of a transaction is always positive. I then append each transformed transaction into an array to be used for more analysis at a later point in the script.
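The pairwise trick in that snippet is general and doesn’t depend on pandas at all; here is a self-contained sketch on a plain list (Python 3, where izip is just the built-in zip — the row labels are invented for illustration):

```python
from itertools import tee

def next_row(iterable):
    # Pair each item with the item that follows it.
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)  # use itertools.izip on Python 2

rows = ['buy1', 'sell1', 'buy2', 'sell2']
pairs = list(next_row(rows))
print(pairs)  # [('buy1', 'sell1'), ('sell1', 'buy2'), ('buy2', 'sell2')]
```

Filtering the pairs (here, keeping only those that start with a ‘buy’) is what collapses overlapping pairs down to one pair per transaction.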

The post Getting the ‘next’ row of data in a pandas dataframe appeared first on Python Data.


Drupal Console: Add DrupalConsole to a project using Acquia Lightning distribution

Planet Drupal - Mon, 2016-11-28 15:30
Lightning is a base distribution maintained by Acquia. In this short blog post you will learn how to fix the dependency conflicts when trying to add DrupalConsole to a project using the Lightning distribution.

S. Lott: Handling Irregular File Formats

Planet Python - Mon, 2016-11-28 14:35
This is a common issue. We have a file which was printed for human consumption. Consequently, it has many different kinds of lines.

These are the two kinds of lines of interest:

900296268 4/9/16 Mobility, Data Mining and Privacy Expired

900295204 4/1/16 Pro .NET Best Practices

The first is a single physical line with four data elements. The second spans two physical lines; its first line has three data elements.

There are a number of other noise lines in the file which must be filtered out.

The first "solution" pitched to me could be summarized with this:

Move "Expired" on a line by itself to the previous line

That was part of the email subject line. The body of the email was some whining about regular expressions. Which I mostly ignored. Multiline regular expressions are their own kind of challenge.

We (should) all know this: https://blog.codinghorror.com/regular-expressions-now-you-have-two-problems/

Let's do this without regular expressions. There are two things we need to know. One is buffering, and the other is the best way to split each line. It turns out that there are spaces as well as tabs, and we can, by splitting on tabs, make a lot of progress.

Instead of the good approach, I'll pick the other approach that doesn't involve splitting on tabs.

Here's the simulated file, with data lightly redacted.

sample_text = '''
"Your eBooks"

Show 200

Page: 1

Order # Date Title Formats Status Download
xxx315605 9/30/16 R for Cloud Computing Available

xxx304790 6/21/16 Java XML and JSON Available
xxx304790 6/21/16 Accelerated DOM Scripting with Ajax, APIs, and Libraries Available

xxx291633 2/28/16 Practical Google Analytics and Google Tag Manager for Developers
'''

It's not perfectly obvious (because of line wrapping) but there are three examples of the "all-complete-in-one-line" records. There's one example of the "two-lines" record.

Rather than mess with the file, we'll build a file-like object with our sample data.

import io
file_like_object = io.StringIO(sample_text)

I like this because it lets me write proper unit test cases.

The file has four kinds of lines:

  • Complete Records
  • Record Headers (without Available/Expired)
  • Record Trailers (only Available/Expired)
  • Noise

We'll create some decision rules for the two obvious kinds of file lines: complete records and trailers. We can deduce the headers based on a simple adjacency rule: they precede a trailer. The fourth kind of lines are those which are possible headers but are not immediately prior to a trailer.

def complete(words):
    return len(words) > 3 and words[-1] in ('Available', 'Expired')

def trailer(words):
    return len(words) == 1 and words[0] in ('Available', 'Expired')    

We can spot these two kinds of lines easily. The other kinds require a Buffered Generator.

def emit_clean(source):
    header = None
    for line in (line.strip() for line in source):
        words = [w.strip() for w in line.split()]
        if len(words) == 0: continue
        if complete(words):
            yield line
            header = None
        elif trailer(words) and header:
            yield(header + '\t\t' + line)
            header = None
        else:
            # Possible header
            # print('??', line)
            header = line

The Buffered Generator is a way to implement a "look ahead one item" (LA1) algorithm. We do this by buffering rows. When we get to the next row we can use the buffered row and the current row to implement the look-ahead logic.

The actual implementation uses a look-behind buffer, header.

The (line.strip() for line in source) generator expression strips away leading and trailing spaces. This gets rid of the newline characters at the end of each input line.

The default behavior of split() is to split on whitespace. In this case, it will create a number of words for complete records or header records, and a single word for a trailer record. If we had split on tab characters, some of this logic would be simplified.

That's left as an exercise for the reader.

If the len(words) is zero, the line is blank.

If the line matches the complete() function, we can yield it as one of the iterable results of the generator function. We also clear out the look-behind buffer, header.

If the line is a trailer and we have a buffered look-behind line, this is the two-physical-line case. We can assemble a complete record and emit it.

Otherwise, we don't know what the line is. It's a possible header line, so we'll save it for later examination.

This algorithm involves no regular expressions.
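To see the buffering logic in action, here is a self-contained run on a tiny invented sample (the data below is illustrative, not from the real file):

```python
import io

def complete(words):
    return len(words) > 3 and words[-1] in ('Available', 'Expired')

def trailer(words):
    return len(words) == 1 and words[0] in ('Available', 'Expired')

def emit_clean(source):
    header = None
    for line in (line.strip() for line in source):
        words = [w.strip() for w in line.split()]
        if len(words) == 0: continue
        if complete(words):
            yield line          # all-in-one record
            header = None
        elif trailer(words) and header:
            yield(header + '\t\t' + line)  # join buffered header with trailer
            header = None
        else:
            header = line       # possible header; buffer it

sample = io.StringIO('''
some noise line
900296268 4/9/16 Mobility, Data Mining and Privacy Expired

900295204 4/1/16 Pro .NET Best Practices
Available
''')
results = list(emit_clean(sample))
print(results[0])  # the complete one-line record, unchanged
print(results[1])  # header and trailer joined with tabs
```

The noise line is buffered as a "possible header" but never emitted, because no trailer follows it before the next record resets the buffer.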

With Regular Expressions
An alternative would use three regular expressions to match the three kinds of lines.

import re
all_one_pat = re.compile("(.*)\t(.*)\t(.*)\t\t((?:Available)|(?:Expired))")
header_pat = re.compile("(.*)\t(.*)\t(.*)")
trailer_pat = re.compile("((?:Available)|(?:Expired))")

This has the advantage that we can then use the groups() method of each successful match to emit useful data instead of text which needs subsequent parsing. This leads to a slightly more robust process.
def emit_clean2(source):
    header = None
    for line in (line.strip() for line in source):
        if len(line) == 0: continue
        all_one_match = all_one_pat.match(line)
        header_match = header_pat.match(line)
        trailer_match = trailer_pat.match(line)
        if all_one_match:
            yield(all_one_match.groups())
            header = None
        elif header_match and not header:
            header = header_match.groups()
        elif trailer_match and header:
            yield header + trailer_match.groups()
            header = None
        else:
            pass  # noise
The essential processing involves seeing which of the regular expressions match the line at hand. If it's all-in-one, this is good. We can yield the groups of meaningful data. If it's a header, we can save the groups. If it's a trailer, we can combine header and trailer groups and yield the composite.

This has the advantage of explicitly rejecting noise lines instead of treating each noise line as a possible header.

Tryton News: New Tryton release 4.2

Planet Python - Mon, 2016-11-28 13:00

We are proud to announce the 4.2 release of Tryton.

With this release, Tryton extends its scope to tailored user applications like Chronos, and also to serving as a backend for web services. Part of the effort also went into closing the feature gap between the web and desktop clients. The web client is still a little behind in terms of features, but at the current rate the gap will disappear in a few releases. This release contains many bug fixes and performance improvements. Polish is now an official language of Tryton.

Of course, migration from previous series is fully supported.

Major changes for the user
  • The tabs in list view can now have a counter showing the user the number of records under them. The feature is activated by default on tabs where the number should tend to zero, providing a hint to the user about pending tasks.

  • When creating a new record from the drop down menu of a relation, the form will have the value entered in the field as default value. This helps the user fill the form.

  • The buttons can now be configured to require clicks from a number of different users before being triggered. The user can see on the button how many clicks it has already received, and in the tooltip who clicked.

  • With the recent Neso retirement, database management from the client has been removed. This improves the security of the system by removing a potential attack vector.

  • It is now possible to define a different color for each record on the calendar view. This allows grouping records visually.

  • The icons of the relation fields have been improved. Experience has shown that the old version had drawbacks which confused some users: some thought they were searching for a new record while actually they were editing it.

    As a result, the editing button has been put on the left, and a new button to clear the current value has been put on the right of the field.

The web client completes its set of functionalities in order to be closer to the desktop client. The new features implemented in this release are:

  • The CSV Import/Export.

  • The calendar view based on the FullCalendar.

  • Support for translated fields.

  • The Favorites menu.

  • The date picker is now locale aware.

  • Add support for column sorting.

  • Support for confirm attribute on the buttons.

  • A comparison amount has been added on the Balance Sheet and Income Statement, allowing the amounts to be compared with a different date, fiscal year or period.

  • The second currency of an account is now enforced, closing the gap between the documentation and the code. Thus it is possible to compute the balance of such accounts in the account currency.

  • The creation of a tax can be quite complex. To ease the process, a testing wizard has been added that shows the result of the computation, allowing the definition of the tax to be validated.

  • The payment term is no longer required. An invoice without payment term will create a single due line at the invoice date.
  • By default, the invoice reports are stored in the database when the invoice is posted, to ensure the immutability of the document. But on a large setup with a lot of invoices, the database becomes very large, and this can generate an overload and a waste of space for the backups. So a new configuration option has been added to store the invoice reports in the file store instead of the database. The option can be activated on existing databases. If a report is not found in the file store, the system will fall back to the value found in the database.
  • The process to post invoices has been reviewed in order to have better performance when posting a large number of invoices.
  • The tax identifier of the company is stored on the invoice. This is useful when a company has more than one registered identifier. By default, the system will take the first one.
  • A new group for Payment Approval has been added in order to allow finer-grained access to the functionalities.
  • The amount to pay of the invoice is now decreased by the amount of the related payments.
  • It is now possible to block the payment of a line.
  • It is possible to configure the module to always use the RCUR option for a SEPA mandate, as this is allowed by the European Payment Council since November 2016.
  • A statement line for an existing payment will mark the latter as succeeded or failed, depending on the sign, when the statement is validated. This eases the management of the payment state, as some banks credit the bank account first and later, if the payment fails, debit it.
  • The statement line can also refer to a payment group instead of an individual payment. This is useful when the bank statement contains only one operation.
  • Many fields redundant with the account lines have been removed from the line. This simplifies the encoding and avoids mistakes.
  • A new type of analytic account has been introduced for distribution. When used with such an account, the analytic lines will be divided into many lines according to the distribution ratio defined per sub-account.
  • The move line now has an analytic state which defines whether the amount of the line is correctly set on each analytic axis/root. By default this applies only to income lines. A menu entry allows searching for all lines that need analytic correction.
  • The analytic chart now enforces company consistency. It is no longer possible to attach an account to an axis of a different company.
  • The supplier invoice for a depreciable asset does not create analytic lines on posting. Indeed, the creation of analytic lines is postponed to the depreciation moves.
  • A more generic address design has been added. It supports as many lines as needed for the streets instead of the previous limitation of two lines. Also the formatting of the address can be configured per country and language. The formats for about 65 countries are preconfigured.
  • A common problem with referential data like parties is duplication: the same party often ends up being created twice in the system. Tryton now has a wizard that merges one party into another. The merged party is deactivated to prevent new usage, and the remaining one inherits all the documents. For example, its receivable amount will contain the sum of both parties.
  • SEPA Creditor identifier is now validated and unified into the party identifiers.
  • Phone numbers are now automatically formatted using the phonenumbers library.
  • A new relate from products to variants has been added in order to ease the navigation between them.
  • As the payment term is no longer required on the invoice, the same applies also to the purchase.
  • It is now possible to create a purchase request without product. In such case, a description is required and will be used for the creation of the purchase line.
  • When converting request into purchase, the requests containing the same product with the same unit will be merged into the same purchase line. This simplifies the purchase order.

A new module for managing purchase requisitions has been added. It allows users to create their purchase requisitions which will be approved or rejected by a member of the Approval group. On approval, purchase requests will be created.

  • As the payment term is no longer required on the invoice, the same applies also to the sale.
  • A lead time can be configured for internal shipments between warehouses. In such case the internal shipment uses a transit location during the time between the shipping and the reception.
  • A relate has been added to the location to show lots and their quantities inside the location.
  • The default location defined on the product is used in the output of production. This unifies the behaviour of this feature with incoming shipments.
  • The supply period of a supplier is now configurable. Previously it was always one day.
Package Shipping

A new set of modules has been published. They provide integration with shipping services for printing labels and store shipping references per packages of shipments.

Two services are currently supported: UPS and DPD.

  • It is possible to restrict the usage of carriers per country of origin and destination. Other criteria can be easily added. The carrier selection is already enforced by default on sales.
  • A Web-Extension named Chronos has been published. It allows quickly encoding timesheet entries using the new user application API (see below). The application works offline and synchronises when the user is back online.

    It will be available soon on the different browser markets.

  • It is now possible to define a start and an end date for employees. In that case, they are not allowed to encode timesheets outside the period.

  • The work has been simplified by removing the tree structure, which was considered redundant with the tree structure of the projects.

  • The timesheet works are created automatically from the project form. This simplifies the management of projects.
  • Thanks to the new lead time per BoM and routing, start dates can be computed for productions. This allows to have a better schedule of the production needs for long running productions.

Following the work on previous versions, the production area has received two new modules to extend its functionalities:

Work Timesheet

It allows to track the time spent per production work.


Similar to the stock_split module, this module allows splitting a production into several units.

Major changes for the developer
  • The desktop client supports GTK-3 thanks to the pygtkcompat module. The support is still experimental and can be tested by setting GTK_VERSION=3 in the environment. The plan is to switch completely to GTK-3 within a year.
  • The server now provides a way to connect URL rules to a callable endpoint. This feature comes with a set of wrappers that simplify the developer's work, such as instantiating the pool for the database name found in the URL or starting the transaction with automatic retry on operational error. The most interesting wrapper is user_application, which authenticates a user with a key for a specific set of endpoints. It allows creating applications that do not need to log in on each use. Chronos is an example of such an application.
  • The translations now support derivative languages. Most of the main languages have been renamed to drop their country code. This allowed merging all the Latin American Spanish translations into one common language deriving from Spanish.
  • It is now possible to configure Binary fields to store the data in file store by setting the attribute file_id to the name of the char column which will contain the file store identifier.
  • A new level of access rights has been added targeting the buttons. It allows to define how many different users must click on the button to actually trigger it.
  • It is possible to configure a different cache than the MemoryCache. It just needs to define the fully qualified name of an alternative class in the cache configuration section.
  • A new widget has been implemented in the clients for the Char field that stores PYSON expression. The widget displays a human readable representation of the expression and uses an evaluated dump of the expression as internal value.
  • The read-only state for xxx2Many fields is now limited to the addition and removal of records. The target records must manage their own read-only state for editing. This behaviour allows, for example, still editing the note field of a line of a validated sale while the other fields are read-only.
  • To speed up the tests, and especially the scenarios, a new option has been added that stores clean dumps of databases (per set of installed modules). A dump is automatically loaded instead of creating a new database, which is much faster. Of course, the developer is responsible for clearing the cache when the database schema definition has been modified.
  • A new mixin has been added to generalize the common pattern of ordering records with a sequence field. It is named sequence_ordered and can be customized with the name and label of the sequence field, the default order, and whether nulls must be sorted first.

The login process has been reworked to be very customizable. It is now possible to plug any authentication factors without the need to adapt the clients.


The authentication_sms module sends a code by SMS to the user's mobile phone, which the user must enter to proceed with the login.


The module web_user is the first of a new set of modules which aim to provide facilities to developers who want to use Tryton as a backend for web development.

Categories: FLOSS Project Planets

Free Software Directory meeting recap for November 25th, 2016

FSF Blogs - Mon, 2016-11-28 12:51

Every week free software activists from around the world come together in #fsf on irc.freenode.org to help improve the Free Software Directory. This recaps the work we accomplished on the Friday, November 25th, 2016 meeting.

Last week's theme was friends and family, where we asked our regular volunteers to invite new volunteers to participate in the weekly meeting. mangeurdenuage had some great success posting in the Trisquel forums and some other spaces and attracted two new volunteers. They then trained the new recruits and helped them to add their first entries. While we didn't get as many new volunteers as we might have hoped, we put together some ideas for attracting more volunteers going forward.

And while the theme was about new friends and family joining us, there was still a lot of work from previous meetings that needed more attention. David_Hedlund has taken the lead in working through new requirements and categories for free software in the directory. For any free software package, a user always has the ability to modify the work to meet their needs. But for someone newly introduced to a package by the directory, there can be some other pertinent information they would like to know before diving in and downloading the software. The work we're doing on the categories should eventually make it easy for volunteers to tag certain issues with packages, and have some template text displayed on the entry. This feeds off the work we did in tagging some works as historical, but addresses a wider range of issues. While the work is not complete, a lot of progress was made at the meeting and we hope to have this feature fully implemented in the near future.

If you would like to help update the directory, meet with us every Friday in #fsf on irc.freenode.org from 1 p.m. to 4 p.m. EST (18:00 to 20:00 UTC).

Categories: FLOSS Project Planets

Mentoring for Google Code-in – WikiToLearn

Planet KDE - Mon, 2016-11-28 12:35

Google Code-in has just begun. I’ll be mentoring this time.

If you know any pre-university students who are interested in computers or open source, please do inform them about this. Tasks range across coding, documentation, training, outreach, research, quality assurance and user interface. Also, students earn prizes for successful completion of tasks.

What is Google Code-in ?

Google Code-in is a contest by Google to introduce pre-university students (ages 13-17) to open source software development. Since 2010, over 3200 students from 99 countries have completed work in the contest.

What I’ll be doing ?

I’ll be mentoring for tasks under WikiToLearn, KDE organization.
I have published a task related to WikiToLearn community : What can I do for WikiToLearn

I’ll be helping students with code and design for this task.

I have few other tasks in my mind. I may publish them as we move on (based on our progress).

Why I’m doing this ?

Well, I just love open source and like helping others to get into FOSS. And WikiToLearn, KDE is a great community to work with.
I strongly believe in its philosophy – “Knowledge only grows if shared”. It feels good to help the younger generation get into the community so that our community grows big.

Join WikiToLearn now and contribute however you can.

Categories: FLOSS Project Planets

Michal Čihař: phpMyAdmin security issues

Planet Debian - Mon, 2016-11-28 12:00

You might wonder why there is such a high number of phpMyAdmin security announcements this year. This situation has two main reasons, and I will comment a bit on both.

First of all, we've got quite a lot of attention from people doing security reviews this year. It all started with the audit funded by the Mozilla SOS Fund. It discovered a few minor issues which were fixed in the 4.6.2 release. However, this was really just the beginning of the story, and the announcement attracted quite some attention to us. In the following weeks the security@phpmyadmin.net mailbox was full of reports, and we really struggled to handle such a volume. Handling it actually led us to create a more formalized approach, as we were clearly no longer able to deal with reports based on email only. Anyway, most of the work here was done by Emanuel Bronshtein, who is really looking at every piece of our code and giving useful tips to harden our code base and infrastructure.

The second thing that changed is that we now release security announcements for security hardening even when no practical attack may be possible. A typical example is PMASA-2016-61, where using hash_equals is definitely safer, but even if the timing attack were doable, the practical result of figuring out admin-configured allow/deny rules is usually not critical. Many of the issues also cover quite rare setups (or server misconfigurations, which we've silently fixed in the past), like PMASA-2016-54, which could only be caused by the server executing shell scripts shipped together with phpMyAdmin.
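The hash_equals change mentioned above is an instance of a general pattern: compare secret-dependent strings with a constant-time primitive. Here is a minimal Python sketch of the same idea (the function name and rule strings are illustrative, not phpMyAdmin's actual code):

```python
import hmac

def rules_match(configured: str, supplied: str) -> bool:
    # hmac.compare_digest takes time independent of where the strings
    # first differ, so response timing leaks nothing about the
    # configured allow/deny rule.
    return hmac.compare_digest(configured, supplied)

print(rules_match("allow 10.0.0.0/8", "allow 10.0.0.0/8"))  # True
```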

Overall, phpMyAdmin indeed got safer this year. I don't think there was any bug that was really critical; on the other hand, we've made quite a lot of hardenings and we use current best practices when dealing with sensitive data. That said, I'm pretty sure our code was not in worse shape than any similarly sized project with 18 years of history; we just became more visible thanks to the security audit, and people looked deeper into our code base.

Besides security announcements, all this led to a generic hardening of our code and infrastructure, which might be less visible but is important as well:

  • All our websites are served over HTTPS only
  • All our releases are PGP signed
  • We actively encourage users to verify the downloaded files
  • All new Git tags are PGP signed as well

Filed under: Debian English phpMyAdmin SUSE | 0 comments

Categories: FLOSS Project Planets

KDAB and Meiller – Tipper Truck App

Planet KDE - Mon, 2016-11-28 11:54

Design, Technical Excellence and Superb User Experience

Why does a tipper truck need an app? Meiller is the leading manufacturer of tippers in Europe. KDAB software developers and UI/UX designers worked with Meiller to create a mobile app that interacts with embedded hardware on the truck, allowing drivers to diagnose and fix problems – even when on the road. KDAB shows us how technical excellence and stunning user experience go hand in hand.

The post KDAB and Meiller – Tipper Truck App appeared first on KDAB.

Categories: FLOSS Project Planets

Kris Vanderwater: A response to Maxime: On Drupal and Drupal Commerce

Planet Drupal - Mon, 2016-11-28 11:30
A response to Maxime: On Drupal and Drupal Commerce by Kris Vanderwater -- 28 November 2016

This blog was originally intended as a comment on Maxime's medium post. It got long, and I am loath to create content for mega-sites. As such, I responded with a post of my own, which is exactly what Maxime did to Robert Douglass' original Facebook post... I guess we all have our competing standards ;-)

Categories: FLOSS Project Planets

Weekly Python Chat: Dictionaries in Python

Planet Python - Mon, 2016-11-28 11:30

Dictionaries are a very useful data type, but they can be a little more difficult than lists.

We'll start very basic but I do plan to answer more advanced dictionary questions toward the end of the chat.

Categories: FLOSS Project Planets

Doug Hellmann: filecmp — Compare Files — PyMOTW 3

Planet Python - Mon, 2016-11-28 09:00
The filecmp module includes functions and a class for comparing files and directories on the file system. Read more… This post is part of the Python Module of the Week series for Python 3. See PyMOTW.com for more articles from the series.
Categories: FLOSS Project Planets

Clint Adams: Not the Grace Hopper Conference

Planet Debian - Mon, 2016-11-28 08:56

Do you love porting? For ideas on how to make GHC suck less on your favorite architecture, see this not-at-all ugly table.

Categories: FLOSS Project Planets

OSTraining: What's Happening With Drupal Commerce in Drupal 8?

Planet Drupal - Mon, 2016-11-28 08:51

In the last few weeks, there's been some controversy in the Drupal community. Acquia launched a major partnership with Magento, which has left some people wondering about the support for Drupal Commerce. There had already been some nervousness, because Drupal Commerce 2 has been slow to arrive in Drupal 8.

So, what's happening with Drupal Commerce?

First, I would recommend you read Dries' post on Acquia's plans for e-commerce.

Next, I'd highly recommend that you watch this video from Ryan Szrama, one of the lead developers of Drupal Commerce. This video was recorded at Drupal 8 Day. Ryan covers the history, architecture, and features of Drupal Commerce 2 on Drupal 8. There was a lot of interest in Ryan's presentation, with over 30 minutes of questions afterwards.

Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Jim Fulton

Planet Python - Mon, 2016-11-28 08:30

This week we welcome Jim Fulton (@j1mfulton) as our PyDev of the Week! He has been doing software development for over a quarter century. Jim is the chief architect of Zope, an object-oriented web application server written in Python. You will actually find various other Python packages using some Zope components, such as Twisted. Anyway, Jim has a nice website that goes over what he’s been up to over the years. You can also check out what projects he’s a part of on Github. Let’s take a few minutes to get to know our fellow Pythonista better!

Can you tell us a little about yourself (hobbies, education, etc):

I started my career working with rainfall-runoff models. I was in a combined BS/MS Civil Engineering/Systems Engineering Water-Resources program, where I explored rainfall-runoff model calibration. Later I applied rainfall-runoff models at the US Geological Survey. Over the years, my modeling work and work applying, supporting, and developing data-analysis software took me further and further into software engineering. Eventually, I switched to software engineering full time, after getting a Masters in software engineering and joining Digital Creations, which later became Zope Corporation.

Since joining Digital Creations/Zope Corporation I’ve been fortunate to help create the Zope ecosystem and work on a variety of interesting projects.

I’ve been using Python since 1992, have been an on-again and off-again Python contributor and was involved in early efforts to promote Python, such as the PSA and early conferences. I was at SPAM I, hosted SPAM II and SPAM III at the USGS, and was sad to see “SPAM” replaced by IPC :), but am really impressed with the way PyCon(s) has evolved.

As far as hobbies, I most enjoy solving practical problems, from software problems, to projects around the house, to roasting my own coffee to get coffee that tastes like coffee.

Why did you start using Python?

At the USGS, we were using Rand RDB, a system for data wrangling that used the Unix shell with some specialized programs as operators to provide a 4GL, for data analyses. This was written in perl, and the most important operator provided data transformation using perl expressions. In the version of perl at the time, perl4, perl was mostly geared towards text processing, and generally took a very permissive attitude toward expressions. For example, the expression 1 + ‘4GL’ + ‘hello’ evaluated without error to 5. I wanted more control and asked on the Perl mailing list about possible object-oriented features to give me control over expression evaluation in my applications. Tom Christiansen let me know in no uncertain terms that Perl would never support object-oriented programming.
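To make that permissiveness concrete, here is a toy Python model of perl4-style numification (an illustration only, not how Perl actually parses strings):

```python
import re

def loose_add(a, b):
    # Mimic perl4: a string contributes its leading numeric prefix,
    # or 0 if it has none, instead of raising a type error.
    def num(x):
        if isinstance(x, str):
            m = re.match(r'\s*[-+]?\d+(\.\d+)?', x)
            return float(m.group()) if m else 0.0
        return x
    return num(a) + num(b)

print(loose_add(loose_add(1, '4GL'), 'hello'))  # 5.0
```

This is exactly the behaviour that motivated wanting object-oriented control over expression evaluation: the error passes silently instead of being raised.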

Categories: FLOSS Project Planets

Eli Bendersky: Some notes on the Y combinator

Planet Python - Mon, 2016-11-28 08:25

The goal of this post is to jot down a few notes about the Y combinator, explaining how it works without getting too much into lambda-calculus theory. I'll be using Clojure and Python as the demonstration languages.

The idea is to build up intuition for the Y combinator from simple examples in a way that makes understanding it a sequence of small mental leaps rather than one large one.

Recursion with named functions

It wouldn't be a proper article about recursion if it didn't start with a factorial. Here's a fairly run-of-the-mill implementation in Clojure:

(defn factorial-rec [n]
  (if (zero? n)
    1
    (* n (factorial-rec (- n 1)))))

Recursion is accomplished by invoking the function, by name, within itself. Herein begins the thought experiment that will lead us to the Y combinator. Imagine that we're using a language where functions have no names - they're all anonymous. We can assign anonymous functions to symbols, but those symbols aren't visible or usable from within the function's body.

As an example of what I'm talking about, here is a non-recursive implementation of factorial in Clojure:

(def factorial-loop
  (fn [n]
    (loop [i n answer 1]
      (if (zero? i)
        answer
        (recur (- i 1) (* answer i))))))

Note how this is defined: we assign an anonymous function (lambda in Lisp/Scheme/Python parlance, fn in Clojure) to the symbol factorial-loop. This anonymous function computes the factorial of its parameter, and we can call it as follows:

ycombinator.core=> (factorial-loop 6)
720

To emphasize that factorial-loop is just a convenience symbol and plays no role in the implementation, we can forego it for a slightly more convoluted invocation:

ycombinator.core=> ((fn [n]
              #_=>    (loop [i n answer 1]
              #_=>      (if (zero? i)
              #_=>        answer
              #_=>        (recur (- i 1) (* answer i))))) 6)
720

No names in sight - we just invoke the anonymous function directly. But this implementation of factorial isn't recursive, so we don't really need to refer to the function's name from within its body. What if we do want to use recursion? This brings us back to the thought experiment.

Recursion with anonymous functions

It turns out this is absolutely possible by using some ingenuity and cranking the abstraction level up one notch. In our original factorial-rec, at the point where the function invokes itself all we need is an object that implements factorial, right? In factorial-rec we're using the fact that the symbol factorial-rec is bound to such an object (by the nature of defn). But we can't rely on that in our thought experiment. How else can we get access to such an object? Well, we can take it as a parameter... Here's how:

(def factorial-maker
  (fn [self]
    (fn [n]
      (if (zero? n)
        1
        (* n ((self self) (- n 1)))))))

And now we can compute factorials as follows:

ycombinator.core=> ((factorial-maker factorial-maker) 6)
720

A few things to note:

  1. factorial-maker is not computing a factorial. It creates an (anonymous) function that computes a factorial. It expects to be passed itself as a parameter.
  2. The expression (factorial-maker factorial-maker) does precisely that. It invokes factorial-maker and passes it itself as a parameter. The result of that is a function that computes a factorial, which we then apply to 6.
  3. The recursion inside the factorial is replaced by (self self); when the function created by (factorial-maker factorial-maker) runs for the first time, self is bound to factorial-maker, so (self self) is (factorial-maker factorial-maker). This is equivalent to the first call - recursion!
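The same self-application trick can be written in Python; this is a direct transliteration of factorial-maker, not code from the Clojure listings above:

```python
# Not a factorial itself: a factory that expects itself as `self`
# so the inner function can recurse via self(self).
factorial_maker = lambda self: (
    lambda n: 1 if n == 0 else n * (self(self))(n - 1))

print((factorial_maker(factorial_maker))(6))  # 720
```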

You may still feel uncomfortable about the def and the name factorial-maker. Aren't we just cheating? Nope, because we can do the same expansion as we did with factorial-loop; we don't need that def. Here's how it would look:

ycombinator.core=> (((fn [self]
              #_=>     (fn [n]
              #_=>       (if (zero? n)
              #_=>         1
              #_=>         (* n ((self self) (- n 1))))))
              #_=>   (fn [self]
              #_=>     (fn [n]
              #_=>       (if (zero? n)
              #_=>         1
              #_=>         (* n ((self self) (- n 1))))))) 6)
720

Pretty it is not... But hey, we've now implemented a recursive factorial function, without a single name in sight. How cool is that?

Understanding the example above is about 80% of the way to understanding the Y combinator, so make sure to spend the time required to thoroughly grok how it works. Tracing through the execution for 2-3 calls while drawing the "environments" (call frames) in action helps a lot.

To get a better feel of the direction we're taking, here's another recursive function that's slightly more complex than the factorial:

(defn tree-sum-rec [t]
  (if (nil? t)
    0
    (let [[nodeval left right] t]
      (+ nodeval (tree-sum-rec left) (tree-sum-rec right)))))

Given a binary tree represented as a list-of-lists with numbers for node data, this function computes the sum of all the nodes in the tree. For example:

ycombinator.core=> (def t1 '(1 (2) (4 (3) (7))))
#'ycombinator.core/t1
ycombinator.core=> (tree-sum-rec t1)
17

We can rewrite it without using any symbol names within the function as follows:

(def tree-sum-maker
  (fn [self]
    (fn [t]
      (if (nil? t)
        0
        (let [[nodeval left right] t]
          (+ nodeval ((self self) left) ((self self) right)))))))

And invoke it as follows:

ycombinator.core=> ((tree-sum-maker tree-sum-maker) t1)
17

Note the similarities between tree-sum-maker and factorial-maker. They are transformed very similarly to synthesize the unnamed from the named-recursion variant. The recipe seems to be:

  1. Instead of a function taking a parameter, create a function factory that accepts itself as the self parameter, and returns the actual computation function.
  2. In every place where we'd previously call ourselves, call (self self) instead.
  3. The initial invocation of (foo param) is replaced by ((foo-maker foo-maker) param).
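The recipe transliterates directly to Python. In this sketch the tuple encoding of trees is my own assumption, mirroring the Clojure list-of-lists; short nodes are padded because Python unpacking will not bind missing elements to None the way Clojure destructuring binds them to nil:

```python
def tree_sum_maker(self):
    def tree_sum(t):
        if t is None:
            return 0
        # Pad so leaves like (2,) unpack as (2, None, None).
        nodeval, left, right = (list(t) + [None, None])[:3]
        return nodeval + self(self)(left) + self(self)(right)
    return tree_sum

t1 = (1, (2,), (4, (3,), (7,)))
print(tree_sum_maker(tree_sum_maker)(t1))  # 17
```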
Y combinator - a tool for making anonymous functions recursive

Since there is a clear pattern here, we should be able to abstract it away and provide some method that transforms a given named-recursive function into an unnamed variant. This is precisely what the Y combinator does, though the nature of the problem makes it somewhat obscure at first sight:

(def Ycombinator
  (fn [func]
    ((fn [self]
       (func (fn [n] ((self self) n))))
     (fn [self]
       (func (fn [n] ((self self) n)))))))

I'll explain how it works shortly, but first let's see how we use it. We have to write our factorial as follows:

(def factorial-rec*
  (fn [recurse]
    (fn [n]
      (if (zero? n)
        1
        (* n (recurse (- n 1)))))))

Note the superficial similarity to the factorial-maker version. factorial-rec* also takes a function and returns the actual function computing the factorial, though in this case I don't call the function parameter self (it's not self in the strict sense, as we'll soon see). We can convert this function to a recursive computation of the factorial by invoking the Y combinator on it:

ycombinator.core=> ((Ycombinator factorial-rec*) 6)
720

It's easiest to understand how Ycombinator does its magic by unraveling this invocation step by step. Similarly to how we did earlier, we can get rid of the Ycombinator name and just apply the object it's defined to be directly:

(((fn [func]
    ((fn [self]
       (func (fn [n] ((self self) n))))
     (fn [self]
       (func (fn [n] ((self self) n))))))
  factorial-rec*)
 6)

As before, this does two things:

  1. Call the Y combinator (just a scary-looking anonymous function) on factorial-rec*.
  2. Call the result of (1) on 6.

If you look carefully at step 1, it invokes the following anonymous function:

(fn [self] (func (fn [n] ((self self) n))))

On itself, with func bound to factorial-rec*. So what we get is:

(((fn [self]
    (factorial-rec* (fn [n] ((self self) n))))
  (fn [self]
    (factorial-rec* (fn [n] ((self self) n))))) 6)

And if we actually perform the call:

((factorial-rec*
   (fn [n]
     (((fn [self]
         (factorial-rec* (fn [n] ((self self) n))))
       (fn [self]
         (factorial-rec* (fn [n] ((self self) n))))) n))) 6)

This calls factorial-rec*, passing it an anonymous function as recurse [1]. factorial-rec* returns a factorial-computing function. This is where the first step ends. Invoking this factorial-computing function on 6 is the second step.

It should now be obvious what's going on. When the invocation with 6 happens and the program gets to calling recurse, it calls the parameter of factorial-rec* as shown above. But we've already unwrapped this call before - it... recurses into factorial-rec*, while propagating itself forward so that the recurse parameter is always bound properly. It's just the same trick as was employed by factorial-maker earlier in the post.

So, the Y combinator is the magic sauce that lets us take code like factorial-rec and convert it into code like factorial-maker. Here's how we can implement an unnamed version of tree-sum-rec:

(def tree-sum-rec*
  (fn [recurse]
    (fn [t]
      (if (nil? t)
        0
        (let [[nodeval left right] t]
          (+ nodeval (recurse left) (recurse right)))))))

And using it with the Y combinator:

ycombinator.core=> ((Ycombinator tree-sum-rec*) t1)
17

Here is an alternative formulation of the Y combinator that can make it a bit easier to understand. In this version I'm using named Clojure functions for further simplification (since many folks find the syntax of anonymous functions applied to other anonymous functions too cryptic):

(defn apply-to-self [func]
  (func func))

(defn Ycombinator-alt [func]
  (apply-to-self
    (fn [self]
      (func (fn [n] ((self self) n))))))

The Y combinator in Python

Finally, just to show that the Y combinator isn't something unique to the Lisp family of languages, here's a Python implementation:

ycombinator = lambda func: \
    (lambda self: func(lambda n: (self(self))(n)))(
     lambda self: func(lambda n: (self(self))(n)))

factorial = lambda recurse: \
    lambda n: \
        1 if n == 0 else n * recurse(n - 1)

And we can invoke it as follows:

>>> (ycombinator(factorial))(6)
720

There's no real difference between the Python and the Clojure versions. As long as the language supports creating anonymous functions and treats them as first-class citizens, all is good.
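To underline that nothing in the combinator is factorial-specific, the same Python ycombinator drives any unary recursive definition unchanged. The Fibonacci example below is my own addition, written against the same recurse parameter:

```python
ycombinator = lambda func: \
    (lambda self: func(lambda n: (self(self))(n)))(
     lambda self: func(lambda n: (self(self))(n)))

# Any function taking a `recurse` parameter plugs straight in:
fibonacci = lambda recurse: \
    lambda n: n if n < 2 else recurse(n - 1) + recurse(n - 2)

print((ycombinator(fibonacci))(10))  # 55
```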

It's even possible to create the Y combinator in C++. Static typing makes it somewhat less elegant than in the more dynamic languages, but C++14's generic lambdas help a lot. Take a look at Rosetta Stone for an example.


Incidentally, note that by starting with (Ycombinator factorial-rec*), we arrived at (factorial-rec* (Ycombinator factorial-rec*)). For this reason, the Y combinator is called a fixed-point combinator in lambda calculus.

There's another interesting thing to note here - the equivalence mentioned above is imperfect. The call (Ycombinator factorial-rec*) results in a delayed fixed point equivalence (the delay achieved by means of wrapping the result in a fn). This is because we're using Clojure - an eagerly evaluated language. This version of the Y combinator is called the applicative-order Y combinator. Without the delay, we'd get an infinite loop. In lazily evaluated languages, it's possible to define the Y combinator somewhat more succinctly.

All of this is very interesting, but I'm deliberately avoiding getting too deep into lambda calculus and programming language theory in this post; I may write more about it some time in the future.

Categories: FLOSS Project Planets

Qt World Summit 2016 Webinar Series – Christmas Sessions

Planet KDE - Mon, 2016-11-28 08:13

‘Tis the season to be jolly and as always we are just trying to be Qt /kjuːt/. We just keep on giving and giving, and here is another present for you: we are hosting webinars based on the breakout sessions from the Qt World Summit 2016. So, grab a cup of cocoa and sign up for our December Tuesday webinars, where you can join our R&D developers online for technical sessions that will keep your computer warm throughout 2017. Best of all – even if you can’t make it online – by signing up, Santa will bring the recorded session to you.

Introducing Qt Visual Studio Tools

December 6th at 5 pm CET, by Maurice Kalinowski

New possibilities with Qt WebEngine

December 13th at 5 pm CET, by Allan Sandfeld Jensen

Qt Quick Scene Graph Advancements in Qt 5.8 and Beyond 

December 20th at 10 am CET (Rescheduled from November 15th), by Laszlo Agocs


Also, stay tuned for details on upcoming webinars in January!

Make sure to check our events calendar for the full list of Qt-related events delivered by us and our partners.

The post Qt World Summit 2016 Webinar Series – Christmas Sessions appeared first on Qt Blog.

Categories: FLOSS Project Planets