FLOSS Project Planets

undpaul: Make your styleguide a living styleguide!

Planet Drupal - Tue, 2014-10-21 02:09

Sound familiar? You or your team is building a site, and during this process all implemented parts are styled through templates and CSS. The CSS files (ideally you are using a CSS preprocessor like Sass) get bigger, more sophisticated and ever more confusing - not to mention that these files become almost unmaintainable and more and more error-prone.

Categories: FLOSS Project Planets

Thomas Goirand: OpenStack Juno is out, Debian (and Ubuntu Trusty ports) packages ready

Planet Debian - Tue, 2014-10-21 01:45

This is just a quick announcement: Debian packages for Juno are out. In fact, they were ready on the day of the release, the 16th of October. I uploaded them all (to Experimental) the same day, literally a few hours after the final release was git tagged. But I had no time to announce it.

This weekend, I took the time to do an Ubuntu Trusty port, which I have also published (it’s just a matter of rebuilding everything, and it should work out of the box). Here are the backports repositories. For Wheezy:

deb http://archive.gplhost.com/debian juno-backports main

deb http://archive.gplhost.com/debian juno main

For trusty:

deb http://archive.gplhost.com/debian trusty-juno-backports main

But of course, everything is also available directly in Debian. Sid/Jessie contains OpenStack Icehouse (which has a better chance of receiving security support for long enough), and it will stay like this until Jessie is released, so I have uploaded all of Juno to Debian Experimental. This shows on the OpenStack QA page (you may also notice that the team is nearly reaching 200 packages… though I am planning to off-load some of that to the Python modules team when the migration to Git is finished). On the QA page, you may also see that I uploaded all of the last Icehouse point release to Sid, and that all packages migrated to Jessie. There are only a few minor issues with some Python modules, which I fixed, that haven’t migrated to Jessie yet.

I can already tell that all packages can be installed without an issue, and I know that Horizon, at least, works as expected. But I didn’t have time to test it all just yet. I’m currently working on even more installation automation at the package level (by providing some OVS bridging init scripts and such, to make it easier to run Tempest functional testing). I’ll post more about this when it’s ready.

Categories: FLOSS Project Planets

Jasha Joachimsthal: iSwitched

Planet Apache - Tue, 2014-10-21 01:41

After 5 years of using different Android phones I switched to an iPhone. In 2009 I bought an HTC Hero, which was replaced by the HTC Desire a year later. Then my former employer gave me a Samsung Galaxy S2 which was replaced by the S3 after I switched jobs. I was about to switch to the Sony Xperia Z3 but I decided to go for an iDevice: the iPhone 6.

What do I miss in iOS so far?

Going back

I miss the back button on the phone. In iOS it’s up to the application to provide a back link somewhere on the screen. Usually it’s in the top left corner, but sometimes it’s not there. That top left corner is hard to reach with one hand, and Apple has created a workaround for that: tap (not click) the home button twice and the screen slides down so you can reach this button. Adding that extra physical button would have been easier. Another difference is that the back button only works within the context of the current application, while the Android back button can bring you back to the previous application if that one had opened another application.

No widgets on the home screen

When I opened the S3 I saw the latest weather information on my home screen. On the next screen I had a monthly overview of my calendar. On the iPhone there are only app icons. The weather and upcoming appointments for today are in the Notification Centre. I’m not yet in the habit of opening the Notification Centre or the Calendar app to see what’s on my schedule for the next week/month (and yes, I already forgot an appointment).

Swype

It was one of the first apps I bought, but unfortunately there’s no dictionary available for Dutch yet. Let’s hope that’s just a matter of time, because Swype supports many more languages in the Android version.

Notification LED

The S3 had a blinking LED to notify me of unread messages: dark blue for mail and SMS, light blue for WhatsApp, green for Telegram and purple for MeetUp. It kept flashing until I took action to read the message or dismiss the notification. The iPhone blinks briefly when a message arrives, but that’s it. I don’t have my phone on me all of the time and sometimes I don’t hear or feel it, so that blinking LED was handy.

What is better?

Integration with iTunes

Samsung, I like your devices, but Kies sucks. Most of the time it starts, but not always. When it has started, it may or may not recognise the phone. After sacrificing a goat to the gods of Samsung, it may also finish a backup successfully. More often it failed during the backup (luckily I never needed a restore) or didn’t recognise the phone. The KiesViaWifiAgent was turning my MacBook into a fan heater. iTunes just works. It recognises the phone, makes backups and installs iOS updates.

Touch ID

The US Department of Homeland Security took prints of all my fingers when I wanted to enter the US. The Dutch government wanted two fingerprints for my new passport. Now Apple also has a fingerprint of mine to unlock my phone or authorise purchases (but I have no clue what else they do with it).

Permissions

I didn’t switch because of the Apple logo. Although I’ve been a fan of Apple’s desktop OS since System 6, I’ve never been attracted to iOS. The main reason I switched from Android to iOS is the permissions apps get. In Android it’s an all-or-nothing decision. For instance, the Facebook app wants access to my contacts, calendar, SMS and call history (and a lot more). If you don’t want to give the app permission to access your SMS history, then you can’t install it. This is different in iOS, where you as the user can control whether the app gets access to your contacts or calendar (SMS and call history are normally inaccessible to apps). More information is in the article iOS Has App Permissions, Too: And They’re Arguably Better Than Android’s.

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-20

Planet Apache - Mon, 2014-10-20 19:58
  • Load testing Apache Kafka on AWS

    This is a very solid benchmarking post, examining Kafka in good detail. Nicely done. Bottom line:

    I basically spend 2/3 of my work time torture testing and operationalizing distributed systems in production. There’s some that I’m not so pleased with (posts pending in draft forever) and some that have attributes that I really love. Kafka is one of those systems that I pretty much enjoy every bit of, and the fact that it performs predictably well is only a symptom of the reason and not the reason itself: the authors really know what they’re doing. Nothing about this software is an accident. Performance, everything in this post, is only a fraction of what’s important to me and what matters when you run these systems for real. Kafka represents everything I think good distributed systems are about: that thorough and explicit design decisions win.

    (tags: testing aws kafka ec2 load-testing benchmarks performance)

Categories: FLOSS Project Planets

Mike Driscoll: PyWin32 – How to Bring a Window to Front

Planet Python - Mon, 2014-10-20 18:15

I recently saw someone asking how to bring a window to the front in Windows, and I realized I had some old unreleased code that might help with this task. A long time ago, Tim Golden (and possibly some other fellows on the PyWin32 mailing list) showed me how to make windows come to the front on Windows XP. If you’d like to follow along, you will need to download and install your own copy of PyWin32.

We will need to choose something to bring to the front. I like to use Notepad for testing as I know it will be on every Windows desktop in existence. Open up Notepad and then put some other application’s window in front of it.

Now we’re ready to look at some code:

import win32gui


def windowEnumerationHandler(hwnd, top_windows):
    top_windows.append((hwnd, win32gui.GetWindowText(hwnd)))


if __name__ == "__main__":
    results = []
    top_windows = []
    win32gui.EnumWindows(windowEnumerationHandler, top_windows)
    for i in top_windows:
        if "notepad" in i[1].lower():
            print i
            win32gui.ShowWindow(i[0], 5)
            win32gui.SetForegroundWindow(i[0])
            break

We only need PyWin32’s win32gui module for this little script. We write a little function that takes a window handle and a Python list. Then we call win32gui’s EnumWindows method, which takes a callback and an extra argument that is a Python object. According to the documentation, the EnumWindows method “Enumerates all top-level windows on the screen by passing the handle to each window, in turn, to an application-defined callback function”. So we pass it our method and it enumerates the windows, passing a handle of each window plus our Python list to our function. It works kind of like a messed up decorator.

Once that’s done, your top_windows list will be full of lots of items, most of which you didn’t even know were running. You can print that list out and inspect your results if you like. It’s really quite interesting. But for our purposes, we will skip that and just loop over the list, looking for the word “Notepad”. Once we find it, we use win32gui’s ShowWindow and SetForegroundWindow methods to bring the application to the foreground.

Note that you really need to look for a unique string so that you bring up the right window. What would happen if you had multiple Notepad instances running with different files open? With the current code, you would bring forward the first Notepad instance that it found, which might not be what you want.
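
If you need to target one specific window, one option is to match more of the window title instead of just the word “Notepad”. Here is a minimal variation of the loop above; the “myfile.txt” title is a hypothetical example of a document you might have open, not anything the original code requires:

# Match a full, unique title such as "myfile.txt - Notepad".
target = "myfile.txt - notepad"  # hypothetical document title
for hwnd, title in top_windows:
    if title.lower() == target:
        win32gui.ShowWindow(hwnd, 5)  # 5 == SW_SHOW
        win32gui.SetForegroundWindow(hwnd)
        break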

You may be wondering why anyone would even want to go to the trouble of doing this in the first place. In my case, I once had a project where I had to bring a certain window to the foreground and automate it using SendKeys. It was an ugly piece of brittle code that I wouldn’t wish on anyone. Fortunately, there are better tools for that sort of thing nowadays, such as pywinauto, but you still might find this code helpful for something esoteric that is thrown your way. Have fun!

Note: This code was tested using Python 2.7.8 and PyWin32 219 on Windows 7.

Categories: FLOSS Project Planets

Carl Trachte: subprocess.Popen() or Abusing a Home-grown Windows Executable

Planet Python - Mon, 2014-10-20 17:11
Each month I redo 3D block model interpolations for a series of open pits at a distant mine.  Those of you who follow my twitter feed often see me tweet, "The 3D geologic block model interpolation chuggeth . . ."  What's going on is that I've got all the processing power maxed out dealing with millions of model blocks and thousands of data points.  The machine heats up and, with the fan going, sounds like a DC-9 warming up before flight.

All that said, running everything roughly in parallel is more efficient time-wise than running it sequentially.  An hour of chugging is better than four.  The way I've been doing this is with the Python (2.7) subprocess module's Popen, running my five interpolated values in parallel.  Our Python programmer Lori originally wrote this to run in sequence for a different set of problems.  I bastardized it for my own purposes.

The subprocess part of the code is relatively straightforward.  Function startprocess() in my code covers that.

What makes this problem a little more challenging:

1) it's a vendor supplied executable we're dealing with . . . without an API or source . . . that's interactive (you can't feed it the config file path; it asks for it).  This results in a number of time.sleep() and <process>.stdin.write() calls that can be brittle.

2) getting the processes started, as I just mentioned, is easy.  Finding out when to stop, or kill them, requires knowledge of the app and how it generates output.  I've gone for an ugly, but effective check of report file contents.

3) while waiting for the processes to finish their work, I need to know things are working and what's going on.  I've accomplished this by reporting the data files' sizes in MB.

4) the executable isn't designed for a centralized code base (typically all scripts are kept in a folder for the specific project or pit), so it only allows about 100 character columns in the file paths sent to it.  I've omitted this from my sanitized version of the code, but it made things even messier than they are below.  Also, I don't know if all Windows programs do this, but the paths need to be inside quotes - the path kept breaking on the colon (:) when not quoted.

Basically, this is a fairly ugly problem and a script that requires babysitting while it runs.  That's OK; it beats the alternative (running it sequentially while watching each run).  I've tried to adhere to DRY (don't repeat yourself) as much as possible, but I suspect this could be improved upon.

The reason why I blog it is that I suspect there are other people out there who have to do the same sort of thing with their data.  It doesn't have to be a mining problem.  It can be anything that requires intensive computation across voluminous data with an executable not designed with a Python API.

Notes: 

1) I've omitted the file multirunparameters.py that's in an import statement.  It has a bunch of paths and names that are relevant to my project, but not to the reader's programming needs.

2) python 2.7 is listed at the top of the file as "mpython."  This is the Python that our mine planning vendor ships that ties into their quite capable Python API.  The executable I call with subprocess.Popen() is a Windows executable provided by a consultant independent of the mine planning vendor.  It just makes sense to package this interpolation inside the mine planning vendor's multirun (~ batch file) framework as part of an overall working of the 3D geologic block model.  The script exits as soon as this part of the batch is complete.  I've inserted a 10 second pause at the end just to allow a quick look before it disappears.

#!C:/MineSight/x64/mpython

"""
Interpolate grades with <consultant> program
from text files.
"""

import argparse
import subprocess as subx
import os
import collections as colx
import time
from datetime import datetime as dt

# Lookup file of constants, pit names, assay names, paths, etc.
import multirunparameters as paramsx

parser = argparse.ArgumentParser()
# 4 letter argument like 'kwat'
# Feed in at command line.
parser.add_argument('pit', help='four letter, lower case pit abbreviation (kwat)', type=str)
args = parser.parse_args()
PIT = args.pit

pitdir = paramsx.PATHS[PIT]
pathx = paramsx.BASEPATH.format(pitdir)
controlfilepathx = paramsx.CONTROLFILEPATH.format(pitdir)

timestart = dt.now()
print(timestart)

PROGRAM = 'C:/MSPROJECTS/EOMReconciliation/2014/Multirun/AllPits/consultantprogram.exe'

ENDTEXT = 'END <consultant> REPORT'

# These names are the only real difference between pits.
# Double quote is for subprocess.Popen object's stdin.write method
# - Windows path breaks on colon without quotes.
ASSAY1DRIVER = 'KDriverASSAY1{:s}CBT.csv"'.format(PIT)
ASSAY2DRIVER = 'KDriverASSAY2{:s}CBT.csv"'.format(PIT)
ASSAY3DRIVER = 'KDriverASSAY3_{:s}CBT.csv"'.format(PIT)
ASSAY4DRIVER = 'KDriverASSAY4_{:s}CBT.csv"'.format(PIT)
ASSAY5DRIVER = 'KDriverASSAY5_{:s}CBT.csv"'.format(PIT)

RETCHAR = '\n'

ASSAY1 = 'ASSAY1'
ASSAY2 = 'ASSAY2'
ASSAY3 = 'ASSAY3'
ASSAY4 = 'ASSAY4'
ASSAY5 = 'ASSAY5'

NAME = 'name'
DRFILE = 'driver file'
OUTPUT = 'output'
DATFILE = 'data file'
RPTFILE = 'report file'

# data, report files
ASSAY1K = 'ASSAY1K.csv'
ASSAY1RPT = 'ASSAY1.RPT'
ASSAY2K = 'ASSAY2K.csv'
ASSAY2RPT = 'ASSAY2.RPT'
ASSAY3K = 'ASSAY3K.csv'
ASSAY3RPT = 'ASSAY3.RPT'
ASSAY4K = 'ASSAY4K.csv'
ASSAY4RPT = 'ASSAY4.RPT'
ASSAY5K = 'ASSAY5K.csv'
ASSAY5RPT = 'ASSAY5.RPT'

OUTPUTFMT = '{:s}output.txt'

ASSAYS = {1:{NAME:ASSAY1,
             DRFILE:controlfilepathx + ASSAY1DRIVER,
             OUTPUT:pathx + OUTPUTFMT.format(ASSAY1),
             DATFILE:pathx + ASSAY1K,
             RPTFILE:pathx + ASSAY1RPT},
          2:{NAME:ASSAY2,
             DRFILE:controlfilepathx + ASSAY2DRIVER,
             OUTPUT:pathx + OUTPUTFMT.format(ASSAY2),
             DATFILE:pathx + ASSAY2K,
             RPTFILE:pathx + ASSAY2RPT},
          3:{NAME:ASSAY3,
             DRFILE:controlfilepathx + ASSAY3DRIVER,
             OUTPUT:pathx + OUTPUTFMT.format(ASSAY3),
             DATFILE:pathx + ASSAY3K,
             RPTFILE:pathx + ASSAY3RPT},
          4:{NAME:ASSAY4,
             DRFILE:controlfilepathx + ASSAY4DRIVER,
             OUTPUT:pathx + OUTPUTFMT.format(ASSAY4),
             DATFILE:pathx + ASSAY4K,
             RPTFILE:pathx + ASSAY4RPT},
          5:{NAME:ASSAY5,
             DRFILE:controlfilepathx + ASSAY5DRIVER,
             OUTPUT:pathx + OUTPUTFMT.format(ASSAY5),
             DATFILE:pathx + ASSAY5K,
             RPTFILE:pathx + ASSAY5RPT}}

DELFILE = 'delete file'
INTERP = 'interp'
SLEEP = 'sleep'
MSGDRIVER = 'message driver'
MSGRETCHAR = 'message return character'
FINISHED1 = 'finished one assay'
FINISHEDALL = 'finished all interpolations'
TIMEELAPSED = 'time elapsed'
FILEEXISTS = 'report file exists'
DATSIZE = 'data file size'
DONE = 'number interpolations finished'
DATFILEEXIST = 'data file not yet there'
SIZECHANGE = 'report file changed size'

# for converting to megabyte file size from os.stat()
BITSHIFT = 20
# sleeptime - 5 seconds
SLEEPTIME = 5
FINISHED = 'finished'
RPTFILECHSIZE = """
        
Report file for {:s}
changed size; killing process . . .
"""

MESGS = {DELFILE:'\n\nDeleting {} . . .\n\n',
         INTERP:'\n\nInterpolating {:s} . . .\n\n',
         SLEEP:'\nSleeping 2 seconds . . .\n\n',
         MSGDRIVER:'\n\nWriting driver file name to stdin . . .\n\n',
         MSGRETCHAR:'\n\nWriting retchar to stdin for {:s} . . .\n\n',
         FINISHED1:'\n\nFinished {:s}\n\n',
         FINISHEDALL:'\n\nFinished interpolation.\n\n',
         TIMEELAPSED:'\n\n{:d} elapsed seconds\n\n',
         FILEEXISTS:'\n\nReport file for {:s} exists . . .\n\n',
         DATSIZE:'\n\nData file size for {:s} is now {:d}MB . . .\n\n',
         DONE:'\n\n{:d} out of {:d} assays are finished . . .\n\n',
         DATFILEEXIST:"\n\n{:s} doesn't exist yet . . .\n\n",
         SIZECHANGE:RPTFILECHSIZE}

def cleanslate():
    """
    Delete all output files prior to interpolation
    so that their existence can be tracked.
    """
    for key in ASSAYS:
        files = (ASSAYS[key][DATFILE],
                 ASSAYS[key][RPTFILE],
                 ASSAYS[key][OUTPUT])
        for filex in files:
            print(MESGS[DELFILE].format(filex))
            if os.path.exists(filex) and os.path.isfile(filex):
                os.remove(filex)
    return 0

def startprocess(assay):
    """
    Start <consultant program> run for given interpolation.
    Return subprocess.Popen object,
    file object (output file).
    """
    print(MESGS[INTERP].format(ASSAYS[assay][NAME]))
    # XXX - I hate time.sleep - hack
    # XXX - try to re-route standard output so that
    #       it's not all jumbled together.
    print(MESGS[SLEEP])
    time.sleep(2)
    # output file for stdout
    f = open(ASSAYS[assay][OUTPUT], 'w')
    procx = subx.Popen('{0}'.format(PROGRAM), stdin=subx.PIPE, stdout=f)
    print(MESGS[SLEEP])
    time.sleep(2)
    # XXX - problem, starting up Excel CBT 22JUN2014
    #       Ah - this is what happens when the <software usb licence>
    #            key is not attached :-(
    print(MESGS[MSGDRIVER])
    print('\ndriver file = {:s}\n'.format(ASSAYS[assay][DRFILE]))
    procx.stdin.write(ASSAYS[assay][DRFILE])
    print(MESGS[SLEEP])
    time.sleep(2)
    # XXX - this is so jacked up -
    #       no idea what is happening when
    print(MESGS[MSGRETCHAR].format(ASSAYS[assay][NAME]))
    procx.stdin.write(RETCHAR)
    print(MESGS[SLEEP])
    time.sleep(2)
    print(MESGS[MSGRETCHAR].format(ASSAYS[assay][NAME]))
    procx.stdin.write(RETCHAR)
    print(MESGS[SLEEP])
    time.sleep(2)
    return procx, f

def crosslookup(assay):
    """
    From assay string, get numeric
    key for ASSAYS dictionary.
    Returns integer.
    """
    for key in ASSAYS:
        if assay == ASSAYS[key][NAME]:
            return key
    return 0

def checkprocess(assay, assaydict):
    """
    Check to see if assay
    interpolation is finished.
    assay is the item in question
    (ASSAY1, ASSAY2, etc.).
    assaydict is the operating dictionary
    for the assay in question.
    Returns True if finished.
    """
    # Report file indicates process finished.
    assaykey = crosslookup(assay)
    rptfile = ASSAYS[assaykey][RPTFILE]
    datfile = ASSAYS[assaykey][DATFILE]
    if os.path.exists(datfile) and os.path.isfile(datfile):
        # Report size of file in MB.
        datfilesize = os.stat(datfile).st_size >> BITSHIFT
        print(MESGS[DATSIZE].format(assay, datfilesize))
    else:
        # Doesn't exist yet.
        print(MESGS[DATFILEEXIST].format(datfile))
    if os.path.exists(rptfile) and os.path.isfile(rptfile):
        # XXX - not the most efficient way,
        #       but this checking the file appears
        #       to work best.
        f = open(rptfile, 'r')
        txt = f.read()
        f.close()
        # XXX - hack - gah.
        if txt.find(ENDTEXT) > -1:
            # looking for change in reportfile size
            # or big report file
            print(MESGS[SIZECHANGE].format(assay))
            print(MESGS[SLEEP])
            time.sleep(2)
            return True
    return False

PROCX = 'process'
OUTPUTFILE = 'output file'

# Keeps track of files and progress of <consultant program>.
opdict = colx.OrderedDict()

# get rid of preexisting files
cleanslate()

# start all five roughly in parallel
# ASSAYS keys are numbers
for key in ASSAYS:
    # opdict - ordered with assay names as keys
    namex = ASSAYS[key][NAME]
    opdict[namex] = {}
    assaydict = opdict[namex]
    assaydict[PROCX], assaydict[OUTPUTFILE] = startprocess(key)
    # Initialize active status of process.
    assaydict[FINISHED] = False

# For count.
numassays = len(ASSAYS)
# Loop until all finished.
while True:
    # Cycle until done then break.
    # Sleep SLEEPTIME seconds at a time and check between.
    time.sleep(SLEEPTIME)
    # Count.
    i = 0
    for key in opdict:
        assaydict = opdict[key]
        if not assaydict[FINISHED]:
            status = checkprocess(key, assaydict)
            if status:
                # kill process when report file changes
                opdict[key][PROCX].kill()
                assaydict[FINISHED] = True
                i += 1
        else:
            i += 1
    print(MESGS[DONE].format(i, numassays))
    # all done
    if i == numassays:
        break

print('\n\nFinished interpolation.\n\n')
timeend = dt.now()
elapsed = timeend - timestart

print(MESGS[TIMEELAPSED].format(elapsed.seconds))
print('\n\n{:d} elapsed minutes\n\n'.format(elapsed.seconds/60))

# Allow quick look at screen.
time.sleep(10)


Categories: FLOSS Project Planets

Drupal @ Penn State: Drupal speed tuning: analyzing and further optimizing Pressflow

Planet Drupal - Mon, 2014-10-20 16:46

TL;DR: I've created a fork of Pressflow for the purposes of conversation and analysis -- https://github.com/btopro/Presser-Flow-FORK

History lesson

Categories: FLOSS Project Planets

Tryton News: New Tryton release 3.4

Planet Python - Mon, 2014-10-20 14:00

We are proud to announce the 3.4 release of Tryton.

In addition to the usual improvements of existing features for users and developers, this release has seen a lot of work done on the accounting part.

Of course, migration from previous series is fully supported with the obvious exception of the ldap_connection module which was removed.

Major changes in graphical user interface
  • The search for relation records has been reworked to take advantage of auto-completion. The search box of the pop-up window is filled with the text entered in the widget.

  • The search/open button of the Many2One widget is now inside the entry box, and the create button has been removed in favor of auto-completion actions or a pop-up button. This change allows harmonizing the size of all widgets inside a form.

  • A new image widget is available on list/tree view.

  • The client can now perform a pre-validation before executing a button action. The validation is based on a domain, so the offending fields can be highlighted and focused instead of showing an error message pop-up.

  • The selection labels are now available in addition to the internal values in the export data (CSV) functionality.

  • The export data window is now predefined with the fields of the current view. This gives a fast way to export what you see.

  • A predefined export can now be replaced directly with a new selection of fields. This eases the process of creating such predefined exports.

  • It is now possible to re-order the list of the exported fields using drag and drop.

  • The range operator of the search box is now inclusive on both endpoints. This is less astonishing behavior for users, even if the previous inclusive-exclusive behavior had some practical advantages.

  • The client now loads plug-ins defined in the user's local directory (~/.config/tryton/x.y/plugins).

Major changes on the server side
  • A new mixin, MatchMixin, is introduced. It allows implementing a common pattern in Tryton: finding records that match certain values.
  • Another mixin, UnionMixin, is also introduced. It allows defining a ModelSQL which is the UNION of several ModelSQLs.
  • Currently, Tryton doesn't update a record defined in an XML file if that record has been modified outside the XML. Now, it is possible to find those records and force the update to get them synchronised with the XML.
  • A Python descriptor has been added to the Selection field. It allows defining an attribute on a Model which will contain the selection label of the record. It is planned to update all the reports to use this descriptor instead of hard-coded values.
  • A new configuration file format is introduced for the server. It is easily extendable to be used by modules. For example, the ldap_authentication module starts using it as a replacement for the removed ldap_connection.
  • It is now possible to provide a logging configuration file to set up the server logging. This file uses the Python logging configuration format (see the minimal sketch after this list).
  • The context defined on relation fields is now used to instantiate the target.
  • The SQL clause for a domain on a field can now be customized using a domain_<field> method. In some cases this method allows a more efficient SQL query. The method is designed to support joins.
  • Access rights have been reworked to be active only on RPC calls. With this design, Tryton follows the principle of checking input at the border of the application. So it is no longer required to switch to the root user when calling methods requiring some specific access rights, as long as the call does not come from an RPC call.
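
As a rough illustration of the logging file format mentioned in the list above, here is a minimal sketch that writes such a file and loads it with Python's standard logging machinery. The file path and logger name are placeholders for illustration, not anything trytond prescribes:

import logging
import logging.config

# A minimal file in the stdlib fileConfig format (ConfigParser-style sections).
LOGGING_CONF = """\
[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=simple

[logger_root]
level=INFO
handlers=console

[handler_console]
class=StreamHandler
args=(sys.stderr,)
formatter=simple

[formatter_simple]
format=%(asctime)s %(levelname)s %(name)s %(message)s
"""

with open('/tmp/trytond-logging.conf', 'w') as conf:
    conf.write(LOGGING_CONF)

# trytond would be pointed at the same file; here we load it directly.
logging.config.fileConfig('/tmp/trytond-logging.conf')
logging.getLogger('trytond').info('logging is configured')
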
Modules

Account
  • A new wizard to help reconcile all accounts has been added. It loops over each account and party and proposes lines to reconcile if it can find any. This really speeds up the reconciliation task.

  • There is also another new wizard to ease the creation of cancellation moves. The wizard also automatically reconciles the line with the cancelled sibling.

  • A new option Party Required on account has been added. This option makes the party required for move lines of this account and forbids it for others.

Account Invoice
  • It is now possible to configure which tax rounding to use. There are two ways implemented: per document and per line. The default stays per document.
Account Payment
  • It is now possible to change a succeeded payment to failed.
Account Payment SEPA
  • The scheme Business to Business is supported for direct debit.
  • The mandate now receives a default unique identification using a configured sequence.
  • The module now supports the bank-to-customer debit/credit notification message (CAMT.054).
  • A report to print a standard mandate form has been added.
Account Statement
  • It is now possible to order the statement lines and to give them a number. With those features, it is easier to reproduce the layout of a bank statement.
  • A report for statement has been added. For example, it can be used when using the statement for check deposit.
  • A validation method can be defined on the statement journal. The available methods are: Balance, Amount and Number of Lines. This helps when using the statement for different purposes, like bank statements or check deposits.
Account Stock Continental/Anglo-Saxon
  • The method is now defined on the fiscal year instead of being globally activated on module installation.
Country
  • It is now possible to store zip codes per country. A script is provided to load zip codes from GeoNames.
LDAP Authentication
  • The module ldap_connection has been replaced by an entry in the configuration file of trytond.
Party
  • The new zip codes from the country module are used to auto-complete the zip and city fields on addresses.
Purchase
  • The Confirmed state has been split into Confirmed and Processing, just like the Sale workflow.
Sale Supply Drop Shipment
  • The management of exceptions on drop shipments is propagated from the sale to the purchase.
New modules
  • The Account Payment Clearing module allows generating a clearing account move when a payment has succeeded, transferring the amount from the receivable/payable account to a clearing account. The clearing account will be reconciled later by the statement.
Proteus

Proteus is a library to access Tryton like a client.

  • It is now possible to run reports. This is useful for testing them.
  • A new duplicate method has been added which is similar to the copy menu entry of the client (see the sketch below).
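
As a hedged sketch of what driving Tryton through proteus looks like, including the new duplicate method, consider the following. The database name, user and party values are placeholders, the exact connection arguments can differ between versions, and the instance-level duplicate() call is assumed from the release note above rather than taken from the API docs:

from proteus import config, Model

# Connect to a local Tryton database; 'demo' and 'admin' are placeholders.
config.set_trytond(database='demo', user='admin')

Party = Model.get('party.party')
party = Party(name='Example Party')
party.save()

# The new duplicate method mirrors the client's copy menu entry.
copy = party.duplicate()
print('copied party id: %d' % copy.id)
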
Categories: FLOSS Project Planets

Acquia: DrupalCon Amsterdam Top Ten – Part 1 of 2 with Kris Vanderwater

Planet Drupal - Mon, 2014-10-20 13:34

Part 1 of 2 – Kris Vanderwater (EclipseGc), Acquia’s Developer Evangelist, and I got together in a Google Hangout to catch up on our impressions of DrupalCon Amsterdam. We prepared a list of our top ten sessions from the Con for you to catch up with at home (technically nine sessions and one “other cool thing”). In our list, there’s a little something for most everyone, from coders, to themers, to site builders, to those of us who sell Drupal to clients – but we would recommend all of these sessions to anyone involved in Drupal. See how the other side lives!

Categories: FLOSS Project Planets

SitePoint PHP Drupal: Drupal 8 Hooks and the Symfony Event Dispatcher

Planet Drupal - Mon, 2014-10-20 12:00

With the incorporation of many Symfony components into Drupal in its 8th version, we are seeing a shift away from many Drupalisms towards more modern PHP architectural decisions. For example, the both loved and hated hook system is slowly being replaced. Plugins and annotations are taking away much of the need for info hooks, and the Symfony Event Dispatcher component is replacing some of the invoked hooks. Although they remain strong in Drupal 8, it’s very possible that with Drupal 9 (or maybe 10) hooks will be completely removed.

In this article we are going to look primarily at how the Symfony Event Dispatcher component works in Drupal. Additionally, we will also see how to invoke and then implement a hook in Drupal 8 to achieve goals similar to those of the former.

To follow along or to get quickly started, you can find all the code we work with here in this repository. You can just install the module and you are good to go. The version of Drupal 8 used is the first BETA release so it’s preferable to use that one to ensure compatibility. Alpha 15 should also work just fine. Let’s dive in.

What is the Event Dispatcher component?

A very good definition of the Event Dispatcher component can be found on the Symfony website:

The EventDispatcher component provides tools that allow your application components to communicate with each other by dispatching events and listening to them.

I recommend reading up on that documentation to better understand the principles behind the event dispatcher. You will get a good introduction to how it works in Symfony so we will not cover that here. Rather, we will see an example of how you can use it in Drupal 8.

Continue reading Drupal 8 Hooks and the Symfony Event Dispatcher.

Categories: FLOSS Project Planets

groups.drupal.org frontpage posts: Let's fix critical Drupal 8 issues together!

Planet Drupal - Mon, 2014-10-20 11:57

Every Friday at noon Pacific (3pm New York, 9pm Berlin, 6am Saturday in Sydney) I will be in #drupal-contribute helping people fix critical issues. I will prepare 2-3 issues with up-to-date, actionable issue summaries and familiarize myself with the problems and the suggested solutions in the issues so that I can answer questions.

If you're someone who has already done some work in the Drupal.org issue queue (so you are familiar with patches, coding standards, etc.), even if your experience is not in the core queue, please come by! It's helpful if you know something of Drupal 8 as well, but it's not necessary.

If you're new to contributing to Drupal in general, you can go to https://www.drupal.org/core-mentoring for a session or two to learn the skills you need to fix critical issues. If you're new to Drupal 8, https://api.drupal.org/api/drupal/8 is a great starting point.

Hope to see you there!

Categories: FLOSS Project Planets

Calvin Spealman: The Problem with Coders' Technology Focus

Planet Python - Mon, 2014-10-20 11:30
Coders focus on code. Coders focus on toolchains and development practices. Coders focus on commits and line counts. Coders focus on code, but we don’t focus as well on people.
We need to take a step back and remember why we write code, or possibly re-evaluate why we write code. Many of us might be doing it for the wrong reasons. Maybe you don’t think there can be a wrong reason, and I’m not entirely sure. What I am certain of is that some reasons to code lend themselves to certain attitudes and weights about the code and other motivations might mandate that you take yourself more or less seriously.
We’re taking the wrong motivations seriously and we’re not giving enough attention and weight to the reasons for code that we should.
The most valid and important reason we can code is not what hackers think it is. A good hack isn’t good for its own sake. No programming language or tool is inherently better than another. The technical merits of the approach or of the individual are not the most important factors to consider.
Our impact on people is the only thing that truly matters.
Twitter isn’t great because they developed amazing distributed services internally to support the load requirements of their service, but because they connect millions of voices across the globe.
RSS isn’t great because it encapsulates content in an easily parseable format for client software to consume, but because it connects writers to the readers who care most about their thoughts and feelings and ideas.
The amazing rendering tools built in-house by the likes of Disney aren’t amazing because of their attention to physical based light simulations and the effort required to coordinate the massive render farms churning out frames for new big budget films, but for their ability to tell wonderful stories that touch people.
The next time you find yourself on a forum chastising someone for writing their website in PHP, pause and ask yourself why that was a more important question to ask them than “Does this fulfill something important to you or your users?”
When you are reviewing code and want to stop a merge because you disagree with a technical approach, take a step back and ask yourself if the changes have a positive impact on the people your product serves.
Every time you find yourself valuing the technical contributions of team mates and community members, make sure those contributions translate into enriching and fulfilling the lives of that community and your workplace, before the technical needs.
Nothing that is important can be so without being important for people first.
Categories: FLOSS Project Planets

Phase2: Simplify Your Logstash Configuration

Planet Drupal - Mon, 2014-10-20 10:16

As I mentioned in my recent post, I got a chance to upgrade the drupal.org ELK stack last week. In doing so, I got to take a look at a Logstash configuration that I created over a year ago, clean up some less-than-optimal configurations based on a year’s worth of experience, and simplify the configuration file a great deal.

The Drupal.org Logging Setup

Drupal.org is served by a large (and growing) number of servers. They all ship their logs to a central logging server for archival, and around a month’s worth are kept in the ELK stack for analysis.

Logs for Varnish, Apache, and syslog are forwarded to a centralized log server for analysis by Logstash. Drupal messages are output to syslog using Drupal core’s syslog module so that logging does not add writes to Drupal.org’s busy database servers. (@TODO: Check if these paths can be published.) Apache logs end up in /var/log/apache_logs/$MACHINE/$VHOST/transfer/$DATE.log, Varnish logs end up in /var/log/varnish_logs/$MACHINE/varnishncsa-$DATE.log and syslog logs end up in /var/log/HOSTS/$MACHINE/$DATE.log. All types of logs get gzipped one day after they are closed to save disk space.

Pulling Contextual Smarts From Logs

The Varnish and Apache logs do not contain anything that identifies which machine they are from, but the file input sets a path field that can be matched with grok to pull the machine name out of the path and put it into the logsource field, the same field that grok’s SYSLOGLINE pattern sets when analyzing syslog logs.

Filtering on the logsource field can be quite helpful in the Kibana web UI if a single machine is suspected of behaving weirdly.

Using Grok Overwrite

Consider this snippet from the original version of the Varnish configuration. As I mentioned in my presentation, Varnish logs are nice in that they include the HTTP Host header, so you can see exactly which hostname or IP was requested. This makes sense for a daemon like Varnish, which does not necessarily have a native concept of virtual hosts (vhosts), whereas nginx and Apache default to logging by vhost.

Each Logstash configuration snippet shown below assumes that Apache and Varnish logs have already been processed using the COMBINEDAPACHELOG grok pattern, like so.

filter {
  if [type] == "varnish" or [type] == "apache" {
    grok {
      match => [ "message", "%{COMBINEDAPACHELOG}" ]
    }
  }
}

The following snippet was used to normalize Varnish’s request headers to not include https?:// and the Host header, so that the request field in Apache and Varnish logs will be exactly the same and any filtering of web logs can be performed with the vhost and logsource fields.

filter {
  if [type] == "varnish" {
    grok {
      # Overwrite host for Varnish messages so that it's not always "loghost".
      match => [ "path", "/var/log/varnish_logs/%{HOST:logsource}" ]
    }
    # Grab the vhost and a "request" that matches Apache from the "request" variable for now.
    mutate {
      add_field => [ "full_request", "%{request}" ]
    }
    mutate {
      remove_field => "request"
    }
    grok {
      match => [ "full_request", "https?://%{IPORHOST:vhost}%{GREEDYDATA:request}" ]
    }
    mutate {
      remove_field => "full_request"
    }
  }
}

As written, this snippet copies the request field into a new field called full_request, then unsets the original request field, then uses a grok filter to parse both the vhost and request fields out of that synthesized full_request field. Finally, it deletes full_request.

The original approach works, but it takes a number of steps and mutations. The grok filter has a parameter called overwrite that allows this configuration stanza to be considerably simplified. The overwrite parameter accepts an array of fields that grok should overwrite if it finds matches. By using overwrite, I was able to remove all of the mutate filters from my configuration, and the entire thing now looks like the following.

filter {
  if [type] == "varnish" {
    grok {
      # Overwrite host for Varnish messages so that it's not always "loghost".
      # Grab the vhost and a "request" that matches Apache from the "request" variable for now.
      match => {
        "path" => "/var/log/varnish_logs/%{HOST:logsource}"
        "request" => "https?://%{IPORHOST:vhost}%{GREEDYDATA:request}"
      }
      overwrite => [ "request" ]
    }
  }
}

Much simpler, isn’t it? Two grok filters and three mutate filters have been combined into a single grok filter with two matching patterns and a single field that it can overwrite. Also note that this version of the configuration passes a hash into the grok filter. Every example I’ve seen just passes an array to grok, but the documentation for the grok filter states that it takes a hash, and this works fine.

Ensuring Field Types

Recent versions of Kibana have also gained the useful ability to do statistics calculations on the current working dataset. So, for example, you can have Kibana display the mean number of bytes sent or the standard deviation of backend response times (if you are capturing them – see my DrupalCon Amsterdam slides for more information on how to do this and how to normalize it between Apache, nginx, and Varnish). Then, if you filter down to all requests for a single vhost or a set of paths, the statistics will update.

Kibana will only show this option for numerical fields, however, and by default any data that has been parsed with a grok filter will be a string. Converting string fields to other types is a much better use of the mutate filter. Here is an example of converting the bytes and the response code to integers using a mutate filter.

@TODO: Test that hash syntax works here!

filter {
  if [type] == "varnish" or [type] == "apache" {
    mutate {
      convert => {
        "bytes" => "integer"
        "response" => "integer"
      }
    }
  }
}

Lessons Learned

Logstash is a very powerful tool, and small things like the grok overwrite parameter and the mutate convert parameter can help make your log processing configuration simpler and get more usefulness out of your ELK cluster. Check out Chris Johnson’s post about adding MySQL Slow Query Logs to Logstash!

If you have any other useful Logstash tips and tricks, leave them in the comments!


Categories: FLOSS Project Planets

Libinput integration in KWin/Wayland

Planet KDE - Mon, 2014-10-20 09:27

Today I pushed my outstanding branch to get libinput support into kwin_wayland. Libinput is a very important part of the work to get a full Wayland session in Plasma, which means we have reached a very important milestone. As the name suggests, it allows us to process input events directly. KWin needs to forward the input events to the currently active application(s) and also interpret them before any other application gets them. E.g. if there is a global shortcut, KWin should intercept it and not send it to an application.

Why libinput integration in KWin?

KWin/Wayland already supported input handling by being a Wayland client and connecting to a Seat. But especially for pointer events this was not sufficient at all. We have quite some code where we warp the pointer, and Wayland doesn’t support this (and shouldn’t). Warping the pointer is normally considered evil as it can introduce quite some problems if applications are allowed to warp the pointer. E.g. it can create security issues if you start typing your password and a malicious application warps the pointer to trick you into entering your password into a password field of the malicious application. Also from a usability perspective it can be problematic, as it makes the system behave in an unpredictable way.

On the other hand, if the application is a window manager/compositor, the need for warping the cursor arises. For example, the screen edge handling pushes the cursor slightly back, which needs cursor warping. Or you can move a window with the cursor keys (hold the Control key for very precise movement), and in these cases we need to warp the pointer. With libinput this is possible again, as KWin is put in control of the input events directly. It completely bypasses the currently used Wayland compositor.

Libinput is also an important piece in the puzzle for a full Wayland session which does not rely on another Wayland compositor. So far KWin/Wayland can only be used in a nested scenario – which is important for development and interesting new possibilities like the idea for a SoK project – but we also want full support without the need for a Wayland session. This means we need to handle input (which libinput does) and need to interact with DRM directly. DRM support is still missing. This could be an interesting GSoC project next year.

The merged implementation does not support all of libinput yet. Most importantly, touch screen support is omitted, as I don’t have a touch-enabled device. I plan to sit down with fellow KDE developers who have a touchscreen-enabled device and implement that part together. Also I will contact the VDG to define some global touch gestures to interact with the system (I’m quite interested in having a touch gesture to activate Present Windows). There’s lots of work to be done here and I would welcome any helping hand.

Security challenges

Processing input events directly comes with a slight problem, though: one needs to be root to read the events. And that’s obviously an absolute no-go for KWin. KWin may never ever be executed with root privileges, and also not as a suid binary which drops privileges again (which also wouldn’t help in that case, but so what). The application has not been designed for running as root. The same is the case for Weston, and obviously I looked at how it’s solved there; there is a very neat solution in logind to support the use case we have. The session controller can ask logind to open devices, and logind provides a file descriptor to the opened device. In addition, logind automatically takes care of closing the file descriptors when a VT switch happens, which is extremely convenient for the use cases of Wayland compositors. So obviously I went for this solution, as all it needs is a very few D-Bus calls. This means the libinput integration in kwin_wayland will have a runtime dependency on a logind D-Bus interface. Of course this does not affect kwin_x11, neither does it affect kwin_wayland without libinput integration, but of course without libinput you won’t get support for all features. There is one caveat though: logind will blank the VT when the session controller goes away. So make sure not to run kwin_wayland with libinput support on your primary X session. Because of that, libinput support must be explicitly enabled with the --libinput command line switch of kwin_wayland.
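
To make that handshake more concrete, here is a rough, minimal sketch of the logind calls from a session process, written in Python with dbus-python purely for illustration (KWin of course does this in C++/Qt; the device path is a placeholder and error handling is omitted):

import os

import dbus

bus = dbus.SystemBus()
manager = dbus.Interface(
    bus.get_object('org.freedesktop.login1', '/org/freedesktop/login1'),
    'org.freedesktop.login1.Manager')

# Ask logind for our own session and become its controller.
session_path = manager.GetSessionByPID(os.getpid())
session = dbus.Interface(
    bus.get_object('org.freedesktop.login1', session_path),
    'org.freedesktop.login1.Session')
session.TakeControl(False)  # False: do not force-take control

# Let logind open the device for us; we never need root ourselves.
st = os.stat('/dev/input/event0')  # placeholder device node
fd, paused = session.TakeDevice(os.major(st.st_rdev), os.minor(st.st_rdev))
event_fd = fd.take()  # dbus.types.UnixFd -> plain file descriptor

# event_fd can now be handed to libinput; logind revokes it on VT switch.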

Current state and downsides of libinput and logind

As libinput does not yet have a stable release, the dependency is still optional and it’s possible to build kwin_wayland without libinput support. This is currently very important for the non-Linux operating systems, which might want to provide kwin_wayland, as libinput only supports Linux.

I hope that libinput will become available on other platforms. At XDC during the BSD presentations I heard at least one presenter touch on the topic. So I’m optimistic that in the long run this will happen, as we also see that DRM and KMS are nowadays in quite good shape on the BSDs. For KWin development it’s of course important that we have only one library to interact with. Otherwise it means platform-dependent code which is hard to develop and extremely difficult to test for the main developers not using such a platform. So if you want to get kwin_wayland on non-Linux, please consider putting the energy into getting libinput working (the challenge is udev), as that will help all systems and not just KWin. After all we want to stand on the shoulders of giants.

Logind is in a similar situation. It is developed as a component in systemd, which isn’t available on all systems which run KWin. Luckily we don’t depend on logind directly but only use a subset of a well-defined D-Bus interface, and that interface can be provided by other tools as well. Something like that is already being worked on for the BSDs.
Like with libinput, I would much prefer to keep KWin lean and efficient and not complicate the code base and development by including libraries for specific platforms or having security-relevant code around. As written above: using suid wrappers is very much a no-no to me. But of course it would be possible to implement the subset of the D-Bus interface in an independent project and provide it. KWin would happily use it; it just needs someone to write the code. So if enough people care, I’m quite sure that there will be a developer stepping up and writing the code.

I decided to put out a small FAQ here for those who have questions about the implications of the above:

FAQ Does that mean KWin (and Plasma) depend on systemd?

No.

But it depends on logind?

No. It uses one D-Bus interface provided by logind. It doesn’t care which program is providing this D-Bus interface. It can be logind or logind-shim or the implementation being worked on for the BSDs. Even a small binary just providing the used D-Bus interfaces would work.

You should not use logind, there must be a different solution!

I’m sorry, I did not find any solution which was as efficient and secure as the one provided by logind. Of course there are solutions like weston-launch, but they introduce a lot of complexity – both on the coding side and on the installation side. As such a solution would need to be suid, I’m very reluctant about the idea. We shouldn’t introduce such possible security risks if there are better solutions available. Logind is simply providing a feature which is needed by kwin_wayland.

Does that affect KWin on X11?

No, that only affects kwin_wayland.

But there is no logind for the BSDs! So I won’t be able to run kwin_wayland on BSD systems?

Unfortunately the fact that logind is missing is the least of your problems on BSD. Logind support is only needed for libinput, which right now is not available on BSD. The kwin_wayland binary on BSD will not try to interact with logind. I’m sorry I don’t have a solution for the input stack on BSDs. I really hope the BSD developers can come up with a solution for this, as we don’t have the resources to build a separate input solution for one platform.

How can I change KWin to not use logind?

As I noted, it is important to me that KWin is secure and that the code base is as easy to understand as possible. I don’t like the idea of having ifdefs all over the place and multiple solutions, as that results in bitrot. When I pushed the libinput change it directly failed to build on the CI system, as the ifdefs introduced a variation which I couldn’t test on my system. Each ifdef and each platform-specific solution increases the development and maintenance costs significantly. This means that I will only accept patches which don’t introduce the above mentioned problems. Preferably, a small wrapper binary could provide the needed D-Bus interface for KWin and other applications which need this functionality. This would not need changes in KWin at all and would be, from my perspective, the perfect solution.

Why won’t you implement such a wrapper binary?

Honestly there are a million things I would do if I had time, but a day has only 24 h and I have to prioritize my work. Just check the Wayland TODO list for what we all need to do to get KWin/Wayland ready. Why don’t you open your editor and get some work done?

But if KWin uses logind, Slackware will drop all of KDE!

Yes, I have read that (see comments). Luckily the situation for Slackware is similar to the BSDs: it doesn’t matter right now. Slackware doesn’t provide Wayland packages yet, so the logind support won’t be used as there is no kwin_wayland binary which could be built. And if enough people care, as I said, one or more of them can write the wrapper binary and KWin and other compositors will work just fine.

How can i help?

Best by writing code. See the TODO list I linked in an answer above. Also it would be good if someone documented the steps to get kwin_wayland running and how to develop on it cough.

Categories: FLOSS Project Planets

Michal Čihař: Hosted Weblate has new UI

Planet Debian - Mon, 2014-10-20 09:00

The biggest part of this HackWeek will be spent on Weblate. The major task is to complete the new UI for it. There have already been some blog posts about that here, so regular readers of my blog already know it is using Twitter Bootstrap.

Today it has reached a point where I think it's good enough for wider testing, and I've deployed it at Hosted Weblate (see the Weblate website for the conditions for getting hosting there).

I expect there will be some rough edges, so don't hesitate to report any issues, so that I can quickly fix them.


Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Facundo Batista

Planet Python - Mon, 2014-10-20 08:30

This week we have Facundo Batista (@facundobatista) joining us.

He is a Python Core developer from Argentina. If you happen to speak Spanish, then you might enjoy his blog. Let’s spend some time getting to know Facundo!

Can you tell us a little about yourself (hobbies, education, etc):

I’m a specialist in the Python programming language. With more than 8 years of experience with it, I’m a Core Developer of the language and a member by merit of the Python Software Foundation. I also received the 2009 Community Service Award for organizing PyCon Argentina and the Argentinian Python community, as well as for contributions to the standard library and work in translating the Python documentation.

I gave talks at the main Python conferences in Argentina and in other countries (the United States and Europe). In general, I have strong experience in distributed, collaborative work, having been involved in FLOSS development and working with people around the globe for more than 10 years.

I worked as a Telecommunications Engineer at Movistar and Ericsson, and as a Python expert at Cyclelogic (Developer in Chief) and Canonical (Technical Leader, my current position).

I also love playing tennis, have a one-year-old kid who is a wonderful little person, and enjoy taking photos.

Why did you start using Python?

I needed to process some logs server-side when I was working at Movistar ~14 years ago.

The servers were running SunOS (!). I knew C and other languages not really suited to that task. I learned and used Perl for some months, until I found Python and fell in love.

What other programming languages do you know and which is your favorite?

I have worked with (although I wouldn’t be able to use them nowadays without some re-learning) COBOL, Clipper, Basic, C, C++, Java and Perl.

My favourite, of course, is Python.

What projects are you working on now?

I’m actively working on three projects:

  • CDPedia: it’s a way to compress and build the whole Wikipedia to be used offline. The output can be a CD, DVD, or just a tarball that automatically runs on Linux, Mac, or Windows without needing anything else installed. It aims to be a source of information for schools/people that still don’t have internet access. Currently we’re packaging only the Spanish Wikipedia, but we’re almost ready to start with other languages.
  • Encuentro: it’s a desktop program to select and download a lot of educational documentaries from Argentine public television (which is really awesome these days). The site and program itself are in Spanish, as the TV episodes are only in that language.
  • Linkode: Linkode is the useful pastebin! It’s a kind of short-lived collaboration space, a dynamic pastebin. Some awesome details:

    You can create linkodes anywhere, whenever, and effortlessly.
    Editable texts, not static!
    Every new edition creates a child: you have a tree
    Code/text type autodetection (and coloring!)
    Permanent linkodes (but still the owner can remove them)
    Absolutely anonymous (unless you login, which is dead simple)
    Private URLs: because you can not guess UUIDs

Which Python libraries are your favorite (core or 3rd party)?

I really love the itertools core lib. And of course the decimal one, which I wrote ;).

Regarding external libs, I’m a fan of Twisted, and these days I use a
lot BeautifulSoup.

Is there anything else you’d like to say?

Thanks for the interview!

Previous PyDevs of the Week

Categories: FLOSS Project Planets

Nick Clifton: October 2014 GNU Toolchain Update

GNU Planet! - Mon, 2014-10-20 07:43
In this month's news we have:
  
  * GDB now supports hardware watchpoints on x86 GNU Hurd.

  * GDB has a new command:

       queue-signal <signal-name-or-number>

    This queues a signal to be delivered to the thread when it is resumed.
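
    For example, a session that delivers SIGUSR1 to the current thread the next time it resumes might look like this (a hypothetical session, output omitted):

       (gdb) queue-signal SIGUSR1
       (gdb) continue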

  * GCC supports a new variable attribute:

     __attribute__((io (<addr>)))

    This specifies that the variable is used to address a memory-mapped peripheral.  If an address is specified, the variable is always assigned to that address.  For example:

     volatile int porta __attribute__((io (0x22)));

    Even without an address assigned to it, a variable with this attribute will always be accessed using in/out instructions if supported by the target hardware.

    There are two variations on this attribute:

      __attribute__((io_low (<addr>)))
      __attribute__((address (<addr>)))


    These are like the "io" attribute except that they additionally inform the compiler that the variable falls within the lower half of the I/O area (for "io_low") or outside the I/O area (for "address"), which may make a difference to the instructions generated to access the variable.
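
    For example, the variants might be used like this (a sketch only; the register names and addresses are invented for illustration, not taken from any real device header):

     /* In the lower half of the I/O area, where single-bit set/clear
        instructions can be used on targets that have them: */
     volatile int portb __attribute__((io_low (0x05)));

     /* Outside the I/O area, at a fixed memory address, accessed with
        ordinary load/store instructions: */
     volatile int timer_ctrl __attribute__((address (0x80)));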

  
  * GCC's sanitizer has a few new options:

     -fsanitize=object-size

    This option enables instrumentation of memory references using the __builtin_object_size function.  Various out-of-bounds pointer accesses can be detected in this way.

     -fsanitize=bool

    This option enables instrumentation of loads from bool.  If a value other than 0/1 is loaded, a run-time error is issued.

     -fsanitize=enum

    This option enables instrumentation of loads from an enum type.  If a value outside the range of values for the enum type is loaded, a run-time error is issued.
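
    As a rough illustration of the last two checks (a contrived sketch, not taken from the GCC testsuite), both marked loads below trigger run-time errors when the program is built with -fsanitize=bool -fsanitize=enum:

     #include <stdbool.h>
     #include <string.h>

     enum color { RED, GREEN, BLUE };

     int main(void)
     {
       unsigned char raw8 = 42;  /* neither 0 nor 1 */
       unsigned int raw32 = 42;  /* not a valid enum color */
       bool b;
       enum color c;

       memcpy(&b, &raw8, sizeof b);
       int x = b ? 1 : 0;             /* flagged by -fsanitize=bool */

       /* Assumes the enum type is int-sized on this target. */
       memcpy(&c, &raw32, sizeof c);
       return x + (int) c;            /* flagged by -fsanitize=enum */
     }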


  * The inter-procedural analysis pass now supports a new optimization:
  
     -fipa-icf
     -fipa-icf-functions
     -fipa-icf-variables

    
    This performs identical code folding for functions and/or read-only variables.  The optimization reduces code size, but it may disturb unwind stacks by replacing a function by an equivalent one with a different name.

    The optimization works more effectively with link-time optimization enabled.  It is similar to the ICF optimization performed by the GOLD linker, but it works at a different level and may find equivalences that GOLD misses.
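
    As a minimal sketch of the kind of equivalence involved (the function names are invented for illustration), the two helpers below are structurally identical, so building them with -O2 -fipa-icf may keep a single body and turn the second symbol into an alias of the first:

     int sum_a(int x, int y) { return x + y; }
     int sum_b(int x, int y) { return x + y; }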


  * The AArch64 target now supports a workaround for ARM Cortex-A53 erratum number 835769:

      -mfix-cortex-a53-835769

    When enabled, it inserts a NOP instruction between memory instructions and 64-bit integer multiply-accumulate instructions.

Cheers
  Nick
Categories: FLOSS Project Planets

Forthcoming Kubuntu Interviews

Planet KDE - Mon, 2014-10-20 07:36

Kubuntu 14.10 is due out this week, bringing a choice of rock-solid Plasma 4 or the tech preview of Kubuntu Plasma 5. The team has a couple of interviews lined up to talk about it.

At 21:00 UTC tomorrow (Tuesday), Valorie will be talking with Jupiter Broadcasting's Linux Unplugged about what's new and what's cool. Watch it live at 21:00 UTC on Tuesday, or watch it recorded.

Then on Thursday, fresh from 14.10 being released into the wild, Scarlett and I will be on the AtRandom video podcast starting at 20:30 UTC. Watch it live at 20:30 UTC on Thursday, or watch it recorded.

And feel free to send in questions to either show if there is anything you want to know.

 

Categories: FLOSS Project Planets

Bluespark Labs: Uninstalling and purging field modules all at once

Planet Drupal - Mon, 2014-10-20 07:14

Sometimes we want to uninstall a module from our Drupal site but can't, because we get this dependency: "Required by: Drupal (Field type(s) in use - see Field list)". Even if you delete the fields provided by the module via the UI, or programmatically by executing the field_delete_field() function, you will get a new dependency: "Required by: Drupal (Fields pending deletion)".

These dependencies are created by Drupal core to prevent a module from being uninstalled until all the data related to its fields has been removed from the database, in order to maintain consistency.

This has several drawbacks, the first being that you can't uninstall your module when you want: you have to wait until all the field data values are removed from the database (the rather strangely named field_deleted_data_XX and field_deleted_revision_XX tables) and the meta-information stored in the field_config and field_config_instance tables is removed. And most importantly, nobody actually knows when this is going to happen! These database rows are removed in batches on each cron run, so depending on how regularly cron runs and how much data is stored in our field tables, this task can take anywhere from minutes to weeks.

This is a problem because, naturally, we want to uninstall our module now, and we don't want to be forced to periodically check our production database to see whether we are finally allowed to uninstall the module once all that information has been removed.

To avoid such situations and regain control, you can perform all these tasks in a hook_update_N() function, forcing the deletion of all the information and finally uninstalling the module.
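
A minimal sketch of such an update hook might look like the following (a sketch only: the field name, update number, and batch size are illustrative assumptions, not the exact code from the original gist):

  <?php
  /**
   * Deletes field_example, purges its data, and uninstalls its module.
   */
  function mymodule_update_7100(&$sandbox) {
    // Part 1: data definition. On the first pass, look up the field,
    // remember which module provides it, and mark the field for deletion.
    if (!isset($sandbox['module'])) {
      $field = field_info_field('field_example');
      $sandbox['module'] = $field['module'];
      field_delete_field('field_example');
    }

    // Part 2: purge the deleted field data, 100 database rows per pass,
    // until no fields pending deletion remain.
    field_purge_batch(100);
    $pending = field_read_fields(array('deleted' => 1),
                                 array('include_deleted' => TRUE));
    $sandbox['#finished'] = empty($pending) ? 1 : 0.5;

    // Part 3: with the data gone, the field dependency is lifted and the
    // module can be disabled and uninstalled cleanly. Any empty
    // field_deleted_data_XX / field_deleted_revision_XX tables left
    // behind could be dropped here with db_drop_table().
    if ($sandbox['#finished'] >= 1) {
      module_disable(array($sandbox['module']));
      drupal_uninstall_modules(array($sandbox['module']));
    }
  }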

The job is divided into three parts: data definition, field data purge, and module list cleanup.

In the data definition part we provide all the data required to perform the task: the name of the field to delete. Given that, we get the field_info array and the name of the module to be uninstalled. Finally, field_delete_field() is executed.

After that, the field data is purged in the batch body. Since we don't know how much data we will have to purge, we remove just 100 database rows per batch execution. After each purge we check whether all the data has been removed, to decide whether we have to keep removing data from the database or can continue to the final part.

Once all the data and metadata related to the module have been removed from the database, the Drupal field type dependency is gone and we can disable and uninstall our module cleanly. Finally, we can drop the empty field_deleted_data_XX and field_deleted_revision_XX tables to keep our database clean.

Using this approach, we get two key benefits: (a) we are sure that the module is disabled and our database is clean, and (b) we are confident that we can remove the module from our repository, given that the next deploy won't hit any dependency conflict with that module.

Tags: Drupal Planet
Categories: FLOSS Project Planets

Visitors Voice: That is why we sponsor the Search API Solr module

Planet Drupal - Mon, 2014-10-20 06:03
Since June 2014 we have sponsored the Search API Solr module. There are no strings attached; we sponsor the maintainer Thomas Seidl, a.k.a. Drunken Monkey, with a couple of hours every month that he can spend as he likes. It could be bug fixing, requested features, or working on the Drupal 8 version. We […]
Categories: FLOSS Project Planets