FLOSS Project Planets
I used Python 2.7.
[root@localhost mythcat]# dnf install opencv-python.x86_64
Last metadata expiration check: 0:21:12 ago on Sat Feb 25 23:26:59 2017.
Package Arch Version Repository Size
opencv x86_64 3.1.0-8.fc25 fedora 1.8 M
opencv-python x86_64 3.1.0-8.fc25 fedora 376 k
python2-nose noarch 1.3.7-11.fc25 updates 266 k
python2-numpy x86_64 1:1.11.2-1.fc25 fedora 3.2 M
Install 4 Packages
Total download size: 5.6 M
Installed size: 29 M
Is this ok [y/N]: y
(1/4): opencv-python-3.1.0-8.fc25.x86_64.rpm 855 kB/s | 376 kB 00:00
(2/4): opencv-3.1.0-8.fc25.x86_64.rpm 1.9 MB/s | 1.8 MB 00:00
(3/4): python2-nose-1.3.7-11.fc25.noarch.rpm 543 kB/s | 266 kB 00:00
(4/4): python2-numpy-1.11.2-1.fc25.x86_64.rpm 2.8 MB/s | 3.2 MB 00:01
Total 1.8 MB/s | 5.6 MB 00:03
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Installing : python2-nose-1.3.7-11.fc25.noarch 1/4
Installing : python2-numpy-1:1.11.2-1.fc25.x86_64 2/4
Installing : opencv-3.1.0-8.fc25.x86_64 3/4
Installing : opencv-python-3.1.0-8.fc25.x86_64 4/4
Verifying : opencv-python-3.1.0-8.fc25.x86_64 1/4
Verifying : opencv-3.1.0-8.fc25.x86_64 2/4
Verifying : python2-numpy-1:1.11.2-1.fc25.x86_64 3/4
Verifying : python2-nose-1.3.7-11.fc25.noarch 4/4
opencv.x86_64 3.1.0-8.fc25 opencv-python.x86_64 3.1.0-8.fc25
python2-nose.noarch 1.3.7-11.fc25 python2-numpy.x86_64 1:1.11.2-1.fc25
[root@localhost mythcat]#
This is my test script with OpenCV to detect flow using the Lucas-Kanade optical flow function.
This tracks some points in a black and white video.
First you need:
- one black and white video;
- not an MP4 file;
- the color arguments need to be under 4 values (here there are 3);
- I used this video:
I used cv2.goodFeaturesToTrack().
We take the first frame, detect some Shi-Tomasi corner points in it, then iteratively track those points using Lucas-Kanade optical flow.
We pass the previous frame, the previous points and the next frame to cv2.calcOpticalFlowPyrLK().
It returns the next points along with status numbers that have a value of 1 if the corresponding next point was found, else 0.
We then iteratively pass these next points as the previous points in the next step.
See the code below:
import numpy as np
import cv2

cap = cv2.VideoCapture('candle')
# params for ShiTomasi corner detection
feature_params = dict( maxCorners = 77,
                       qualityLevel = 0.3,
                       minDistance = 7,
                       blockSize = 7 )
# Parameters for lucas kanade optical flow
lk_params = dict( winSize = (17,17),
                  maxLevel = 1,
                  criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))
# Create some random colors
color = np.random.randint(0,255,(100,3))
# Take first frame and find corners in it
ret, old_frame = cap.read()
old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)
p0 = cv2.goodFeaturesToTrack(old_gray, mask = None, **feature_params)
# Create a mask image for drawing purposes
mask = np.zeros_like(old_frame)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # calculate optical flow
    p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, frame_gray, p0, None, **lk_params)
    # Select good points
    good_new = p1[st==1]
    good_old = p0[st==1]
    # draw the tracks
    for i,(new,old) in enumerate(zip(good_new,good_old)):
        a,b = new.ravel()
        c,d = old.ravel()
        mask = cv2.line(mask, (a,b),(c,d), color[i].tolist(), 2)
        frame = cv2.circle(frame,(a,b),5,color[i].tolist(),-1)
    img = cv2.add(frame,mask)
    cv2.imshow('frame', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break
    # Now update the previous frame and previous points
    old_gray = frame_gray.copy()
    p0 = good_new.reshape(-1,1,2)
cv2.destroyAllWindows()
cap.release()

The output of this file is:
systemd 233 is scheduled to be released next week, and there is only a handful of small issues left. As usual there are tons of improvements and fixes, but the most intrusive one probably is another attempt to move from legacy cgroup v1 to a “hybrid” setup where the new unified (cgroup v2) hierarchy is mounted at /sys/fs/cgroup/unified/ and the legacy one stays at /sys/fs/cgroup/ as usual. This should provide an easier path for software like Docker or LXC to migrate to the unified hierarchy, but even that hybrid mode broke some bits.
While systemd 233 will not make it into Debian stretch or Ubuntu zesty, as both are in feature freeze, it will soon be available in Debian experimental, and in the next Ubuntu release after 17.04 gets released. Thus now is another good time to give this some thorough testing!
To help with this, please give the PPA with builds from upstream master a spin. In addition to the usual packages for Ubuntu 16.10 I also uploaded a build for Ubuntu zesty, and a build for Debian stretch (aka testing) which also works on Debian sid. You can use that URL as an apt source:

deb [trusted=yes] https://people.debian.org/~mpitt/tmp/systemd-master-20170225/ /
Thank you, and happy booting!
Developing a small web application I recently had reason to upgrade from Python 3.4 to Python 3.6. The reason for the upgrade concerned the ordering of keyword arguments and was not related to the bug in my test code that I then found. I should have been more careful writing my test code in the first place, so I am writing this down as some penance for not testing my tests robustly enough.

A Simple example program
So here I have a version of the problem reduced down to the minimum required to demonstrate the issue:

import unittest.mock as mock

class MyClass(object):
    def __init__(self):
        pass

    def my_method(self):
        pass

if __name__ == '__main__':
    with mock.patch('__main__.MyClass') as MockMyClass:
        MyClass().my_method()
        MockMyClass.my_method.assert_called_once()
Of course in reality the line MyClass().my_method() was some test code that indirectly caused the target method to be called.

Output in Python 3.4

$ python3.4 mock_example.py
$
No output, leading me to believe my assertions passed, so I was happy that my code and my tests were working. As it turned out, my code was fine but my test was faulty. Here's the output in two later versions of Python of the exact same program given above.

Output in Python 3.5

$ python3.5 mock_example.py
Traceback (most recent call last):
  File "mock_example.py", line 12, in <module>
    MockMyClass.my_method.assert_called_once()
  File "/usr/lib/python3.5/unittest/mock.py", line 583, in __getattr__
    raise AttributeError(name)
AttributeError: assert_called_once
An attribute error, so the test is failing.

Output in Python 3.6

$ python3.6 mock_example.py
Traceback (most recent call last):
  File "mock_example.py", line 12, in <module>
    MockMyClass.my_method.assert_called_once()
  File "/usr/lib/python3.6/unittest/mock.py", line 795, in assert_called_once
    raise AssertionError(msg)
AssertionError: Expected 'my_method' to have been called once. Called 0 times.
Test also failing, with a different error message. Anyone who is (pretty) familiar with the unittest.mock standard library module will know that assert_called_once was introduced in version 3.6, which is why my version 3.5 run fails with an attribute error.

My test was wrong
The problem was, my original test was not testing anything at all. The 3.4 version of the unittest.mock standard library module did not have an assert_called_once method. The mock just allows you to call any method on it; to see this you can try replacing the line MockMyClass.my_method.assert_called_once() with a call to any arbitrarily named method.
With python3.4, python3.5, and python3.6 this yields no error. So in the original program you can avoid calling MyClass.my_method at all:

if __name__ == '__main__':
    with mock.patch('__main__.MyClass') as MockMyClass:
        # Missing call to `MyClass().my_method()`
        MockMyClass.my_method.assert_called_once()  # In 3.4 this still passes.
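To see just how permissive a bare Mock is, here is a minimal, self-contained sketch (the method name is invented): any attribute is created on first access, and calls to it are silently recorded rather than rejected.

```python
import unittest.mock as mock

m = mock.Mock()
# Mock invents this attribute on first access; the call is recorded, not rejected.
m.some_method_nobody_defined()
print(m.some_method_nobody_defined.call_count)  # → 1
```

This recording behavior is also what makes the portable workaround later in the post possible: the calls are all there, you just have to count them yourself.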
This does not change the (original) results, python3.4 still raises no error, whereas python3.5 and python3.6 are raising the original errors.
So although my code turned out to be correct (at least in as much as the desired method was called), had it been faulty (or changed to be faulty) my test would not have complained.The Actual Problem
My mock was wrong. I should instead have been patching the actual method within the class, like so:

if __name__ == '__main__':
    with mock.patch('__main__.MyClass.my_method') as mock_my_method:
        MyClass().my_method()
        mock_my_method.assert_called_once()
Now if we try this in all versions 3.4, 3.5, and 3.6 of Python we get:

$ python3.4 mock_example.py
$ python3.5 mock_example.py
Traceback (most recent call last):
  File "mock_example.py", line 12, in <module>
    mock_my_method.assert_called_once()
  File "/usr/lib/python3.5/unittest/mock.py", line 583, in __getattr__
    raise AttributeError(name)
AttributeError: assert_called_once
$ python3.6 mock_example.py
$
So Python 3.4 and 3.6 pass as we expect. But Python3.5 gives an error stating that there is no assert_called_once method on the mock object, which is true since that method was not added until version 3.6. This is arguably what Python3.4 should have done.
It remains to check that the updated test fails in Python 3.6, so we comment out the call to MyClass().my_method:

$ python3.6 mock_example.py
Traceback (most recent call last):
  File "mock_example.py", line 12, in <module>
    mock_my_method.assert_called_once()
  File "/usr/lib/python3.6/unittest/mock.py", line 795, in assert_called_once
    raise AssertionError(msg)
AssertionError: Expected 'my_method' to have been called once. Called 0 times.
This is the test I should have performed with my original test. Had I done this I would have seen that the test passed in Python3.4 regardless of whether the method in question was actually called or not.
So now my test works in python3.6, and fails in python3.5 because I'm using the method assert_called_once which was introduced in python3.6. Unfortunately it incorrectly passes in python3.4. So if I want my code to work properly for Python versions earlier than 3.6, then I can essentially implement assert_called_once() with assert len(mock_my_method.mock_calls) == 1. If we do this then my test passes in all three versions of Python, and fails in all three if we comment out the call MyClass().my_method().

Conclusions
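As a self-contained sketch of that portable check (using mock.patch.object on the class rather than a string target, so it runs as-is):

```python
import unittest.mock as mock

class MyClass(object):
    def my_method(self):
        pass

# mock.patch.object swaps my_method on the class for the duration of the block.
with mock.patch.object(MyClass, 'my_method') as mock_my_method:
    MyClass().my_method()
    # Portable across 3.4/3.5/3.6: count the recorded calls directly,
    # instead of relying on assert_called_once (new in 3.6).
    assert len(mock_my_method.mock_calls) == 1
print('ok')  # → ok
```

Counting mock_calls works on every version because the mock always records calls; only the convenience assertion methods changed between releases.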
I made an error in writing my original test, but my real sin was that I was a bit lazy in that I did not make sure that my tests would fail, when the code was incorrect. In this instance there was no problem with the code only the test, but that was luck. So for me, this served as a reminder to check that your tests can fail. It may be that mutation testing would have caught this error.
Django Weekly: Django Weekly 27 - Advanced Django querying, Django Signals, Elasticbeanstalk and more
Advanced Django querying: sorting events by date
Application of Django's Case, When queryset operators for sorting events by date.

Find Top Developers
We help companies like Airbnb, Pfizer, and Artsy find great developers. Let us find your next great hire. Get started today.

How to test Django Signals like a pro?
Django Signals are extremely useful for decoupling modules. They allow a low-level Django app to send events for other apps to handle without creating a direct dependency. Signals are easy to set up, but harder to test. So in this article, I’m going to walk you through implementing a context manager for testing Django signals, step by step.
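The linked article walks through its own implementation; as a rough illustration of the idea, here is a hedged sketch with a minimal stand-in Signal class (Django itself is not imported, and catch_signal is an invented name — in a real project you would pass e.g. django.db.models.signals.post_save):

```python
import unittest.mock as mock
from contextlib import contextmanager

# Minimal stand-in for a signal with connect/disconnect/send so the sketch
# runs without Django installed.
class Signal:
    def __init__(self):
        self._receivers = []
    def connect(self, receiver):
        self._receivers.append(receiver)
    def disconnect(self, receiver):
        self._receivers.remove(receiver)
    def send(self, sender, **kwargs):
        for receiver in self._receivers:
            receiver(sender=sender, **kwargs)

@contextmanager
def catch_signal(signal):
    """Attach a mock receiver for the duration of the block, then detach it."""
    handler = mock.Mock()
    signal.connect(handler)
    try:
        yield handler
    finally:
        signal.disconnect(handler)

post_save = Signal()
with catch_signal(post_save) as handler:
    post_save.send(sender="MyModel", created=True)
handler.assert_called_once_with(sender="MyModel", created=True)
```

The test then asserts on the mock receiver, and the finally clause guarantees the receiver is disconnected even if the assertion fails.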
Django 1.11 beta 1 released
Here are the release notes - https://docs.djangoproject.com/en/dev/releases/1.11/

Weekly Python Chat: Django Forms
Kenneth Love and Trey Hunner will answer your questions about how we use Django's forms.

Examples of Django projects deployed through OS packaging systems? (Reddit Discussion)
deployment

Different types of testing in Django
What to test and where to test is a common question I get asked. In this video learn about different types of tests you can write in the context of Django.

DjangoCon US 2017 Update: Call for Proposals, Mentorship, and Financial Aid Are Open!
In case you missed the news, DjangoCon US 2017 will take place in beautiful Spokane, Washington, from August 13-18, 2017.

How to Create User Sign Up View?
In this tutorial I will cover a few strategies to create Django user sign up/registration. Usually I implement it from scratch. You will see it’s very straightforward.

How we extended django CMS to create a single-page application?
Run your django CMS project as a single-page application (SPA) with vue.js and vue-router.

5 Gotchas with Elastic Beanstalk and Django
elastic beanstalk

drf-writable-nested - 11 Stars, 0 Forks
Writable nested model serializer for Django REST Framework.

psychok7/django-celery-inspect - 8 Stars, 1 Fork
Django reusable app that uses Celery Inspect command to monitor workers/tasks via the Django REST Framework.

djeasy - 5 Stars, 3 Forks
Django simple quick setup.
In this tutorial you’ll learn how to set up Python and the Pip package manager on Windows 10, completely from scratch.

Step 1: Download the Python Installer
The best way to install Python on Windows is by downloading the official Python installer from the Python website at python.org.
To do so, open a browser and navigate to https://python.org/. After the page has finished loading, click Downloads.
- The website should detect that you’re on Windows and offer to download the latest version of Python 3 or Python 2. If you don’t know which version of Python to use, I recommend Python 3. Only if you know you’ll need to work with legacy Python 2 code should you pick Python 2.
Under Downloads → Download for Windows, click the “Python 3.X.X” (or “Python 2.X.X”) button to begin downloading the installer.

Sidebar: 64-bit Python vs 32-bit Python
If you’re wondering whether you should use a 32-bit or a 64-bit version of Python then you might want to go with the 32-bit version.
It’s sometimes still problematic to find binary extensions for 64-bit Python on Windows, which means that some third-party modules might not install correctly with a 64-bit version of Python.
My thinking is that it’s best to go with the version currently recommended on python.org. If you click the Python 3 or Python 2 button under “Download for Windows” you’ll get just that.
Remember that if you get this choice wrong and you’d like to switch to another version of Python you can just uninstall Python and then re-install it by downloading another installer from python.org.

Step 2: Run the Python Installer
Once the Python installer file has finished downloading, launch it by double-clicking on it in order to begin the installation.
Be sure to select the Add Python X.Y to PATH checkbox in the setup wizard.
- Please make sure the “Add Python X.Y to PATH” checkbox was enabled in the installer because otherwise you will have problems accessing your Python installation from the command line. If you accidentally installed Python without checking the box, follow this tutorial to add python.exe to your system PATH.
Click Install Now to begin the installation process. The installation should finish quickly and then Python will be ready to go on your system. We’re going to make sure everything was set up correctly in the next step.

Step 3: Verify Python Was Installed Correctly
After the Python installer has finished its work, Python should be installed on your system. Let’s make sure everything went correctly by testing whether Python can be accessed from the Windows Command Prompt:
- Open the Windows Command Prompt by launching cmd.exe
- Type pip and hit Return
- You should see the help text from Python’s “pip” package manager. If you get an error message running pip go through the Python install steps again to make sure you have a working Python installation. Most issues you will encounter here will have something to do with the PATH not being set correctly. Re-installing and making sure that the “Add Python to PATH” option is enabled in the installer should resolve this.
Assuming everything went well and you saw the output from Pip in your command prompt window—Congratulations, you just installed Python on your system!
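If you prefer to verify from inside the interpreter as well, here is a small check you can paste into a Python session (assumes Python 3; not part of the original tutorial):

```python
import sys
import importlib.util

# Print the interpreter version and whether pip is importable from this install.
print(tuple(sys.version_info[:3]))
print(importlib.util.find_spec("pip") is not None)
```

If the second line prints True, the pip module that backs the `pip` command is available to this interpreter.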
Wondering where to go from here? Click here to get some pointers for Python beginners.
Data visualization and storytelling with your data are essential skills that every data scientist needs to communicate insights gained from analyses effectively to any audience out there.
For most beginners, the first package that they use to get in touch with data visualization and storytelling is, naturally, Matplotlib: it is a Python 2D plotting library that enables users to make publication-quality figures. But, what might be even more convincing is the fact that other packages, such as Pandas, intend to build more plotting integration with Matplotlib as time goes on.
However, what might slow down beginners is the fact that this package is pretty extensive. There is so much that you can do with it and it might be hard to still keep a structure when you're learning how to work with Matplotlib.
DataCamp has created a Matplotlib cheat sheet for those who might already know how to use the package to their advantage to make beautiful plots in Python, but who still want to keep a one-page reference handy. Of course, for those who don't know how to work with Matplotlib, this might be the extra push to get convinced and finally get started with data visualization in Python.
(By the way, if you want to get started with this Python package, you might want to consider our Matplotlib tutorial.)
You'll see that this cheat sheet presents you with the six basic steps that you can go through to make beautiful plots.
Check out the infographic by clicking on the button below:
With this handy reference, you'll familiarize yourself in no time with the basics of Matplotlib: you'll learn how you can prepare your data, create a new plot, use some basic plotting routines to your advantage, add customizations to your plots, and save, show and close the plots that you make.
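As a taste of those steps, here is a minimal sketch that runs through them once; it uses the non-interactive Agg backend so no display is needed, and closes the figure instead of showing it (this example is mine, not taken from the cheat sheet):

```python
import io

import matplotlib
matplotlib.use("Agg")  # render off-screen; no display required
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)              # 1. prepare your data
fig, ax = plt.subplots()                        # 2. create a new plot
ax.plot(x, np.sin(x))                           # 3. basic plotting routine
ax.set(title="sin(x)", xlabel="x", ylabel="y")  # 4. customize the plot
buf = io.BytesIO()
fig.savefig(buf, format="png")                  # 5. save the plot
plt.close(fig)                                  # 6. close (instead of show)
print(buf.getbuffer().nbytes > 0)
```

In an interactive session you would call plt.show() for step 6 instead of closing straight away.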
What might have looked difficult before will definitely be more clear once you start using this cheat sheet!
Between brackets: [question score / answers count]
Build date: 2017-02-24 19:53:04 GMT
- Why is x**4.0 faster than x**4 in Python 3? - [119/2]
- How to limit the size of a comprehension? - [20/4]
- How to properly split this list of strings? - [13/5]
- Why is the __dict__ of instances so small in Python 3? - [11/1]
- Python Asynchronous Comprehensions - how do they work? - [10/2]
- Immutability in Python - [10/2]
- Order-invariant hash in Python - [9/2]
- Assign a number to each unique value in a list - [8/7]
- How to add a shared x-label and y-label to a plot created with pandas' plot? - [8/3]
- What happens when I inherit from an instance instead of a class in Python? - [8/2]
Very strange. Verrrry strange.
Yesterday I wrote a blog post on spam stuff that has been hitting my mailbox. Nothing too deep, just me scratching my head.
Coincidentally (I guess/hope), I have been getting messages via my Bitlbee to one of my Jabber accounts, offering me ransomware services. I am reproducing it here, omitting of course everything I can recognize as their brand names and related URLs (as I'm not going to promote the 3vi1-doers). I'm reproducing this in whole as I'm sure the information will be interesting for some.

*BRAND* Ransomware - The Most Advanced and Customisable you've Ever Seen
Conquer your Independence with *BRAND* Ransomware Full Lifetime License!
* UNIQUE FEATURES
* NO DEPENDENCIES (.net or whatever)!!!
* Edit file Icon and UAC - Works on All Windows Versions
* Set Folders and Extensions to Encrypt, Deadline and Russian Roulette
* Edit the Text, speak with voice (multilang) and Colors for Ransom Window
* Enable/disable USB infect, network spread & file melt
* Set Process Name, sleep time, update ransom amount, Give mercy button
* Full-featured headquarter (for Windows) with unlimited builds, PDF reports, charts and maps, totally autonomous operation
* PHP Bridges instead of expensive C&C servers!
* Automatic Bitcoin payment detection (impossible to bypass/crack - we challege who says the contrary to prove what they say!)
* Totally/Mathematically IMPOSSIBLE to DECRYPT! Period.
* Award-Winning Five-Stars support and constant updates!
* We Have lot vouchs in *BRAND* Market, can check!
Watch the promo video: *URL*
Screenshots: *URL*
Website: *URL*
Price: $389 Promo: just $309 - 20% OFF! until 25th Feb 2017
Jabber: *JID*
I think I can comment on this with my students. Hopefully, this is interesting to others.
Now... I had never received Jabber-spam before. This message has been sent to me 14 times in the last 24 hours (all from different JIDs, all unknown to me). I hope this does not last forever :-/ Otherwise, I will have to learn more on how to configure Bitlbee to ignore contacts not known to me. Grrr...
One of the problems with Drupal distributions is that they, by nature, contain an installation profile — and Drupal sites can only have one profile. That means that consumers of a distribution give up the ability to easily customize the out of the box experience.
This was fine when profiles were first conceived. The original goal was to provide “ready-made downloadable packages with their own focus and vision”. The out of the box experience was customized by the profile, and then the app was built on top of that starting point. But customizing the out of the box experience is no longer reserved for those of us that create distributions for others to use as a starting point. It’s become a critical part of testing and continuous integration. Everyone involved in a project, including the CI server, needs a way to reliably and quickly build the application from a single command. Predictably, developers have looked to the installation profile to handle this.
This practice has become so ubiquitous that I recently saw a senior architect refer to it as “the normal Drupal paradigm of each project having their own install profile”. Clearly, if distributions want to be a part of the modern Drupal landscape, they need to solve the problem of profiles.

Old Approach
In July 2016, Lightning introduced lightning.extend.yml which enabled site builders to:
- Install additional modules after Lightning had finished its installation
- Exclude certain Lightning components
- Redirect users to a custom URL upon completion
This worked quite well. It gave site builders the ability to fully customize the out of the box experience via contrib modules, custom code, and configuration. It even allowed them to present users with a custom “Installation Done” page if they chose — giving the illusion of a custom install profile.
But it didn’t allow developers to take full control over the install process and screens. It didn’t allow them to organize their code the way they would like. And it didn’t follow the “normal Drupal paradigm” of having an installation profile for each project.

New Approach
After much debate, the Lightning team has decided to embrace the concept of “inheriting” profiles. AKA sub-profiles. (/throws confetti)
This is not a new idea and we owe a huge thanks to those that have contributed to the current patch and kept the issue alive for over five years. Nor is it a done deal. It still needs to get committed which, at this point, means Drupal 8.4.x.
On a technical level, this means that — similar to sub-themes — you can place the following in your own installation profile’s *.info.yml file and immediately start building a distribution (or simply a profile) on top of Lightning:

base profile:
  name: lightning
To encourage developers to use this method, we will also be including a DrupalConsole command that interactively helps you construct a sub-profile and a script which will convert your old lightning.extend.yml file to the equivalent sub-profile.
This change will require some rearchitecting of Lightning itself, mainly to remove the custom extension selection logic we had implemented and replace it with standard dependencies.
This is all currently planned for the 2.0.5 release of Lightning, which is due out in mid-March. Stay tuned for updates.
Another day, another Acquia Developer Certification exam review (see the previous one: Certified Back end Specialist – Drupal 8). I recently took the Front End Specialist – Drupal 8 exam, so I'll post some brief thoughts on it below.
Drupal core announcements: 8.3.0 release candidate phase begins week of February 27; no Drupal 8.2.x or 7.x patch release planned
The release candidate phase for the 8.3.0 minor release begins the week of February 27. Starting that week, the 8.3.x branch will be subject to release candidate restrictions, with only critical fixes and certain other limited changes allowed.
8.3.x includes new experimental modules for workflows, layout discovery and field layouts; raises stability of the BigPipe module to stable and the Migrate module to beta; and includes several REST, content moderation, authoring experience, performance, and testing improvements among other things. You can read a detailed list of improvements in the announcements of alpha1 and beta1.
Minor versions may include changes to user interfaces, translatable strings, themes, internal APIs like render arrays and controllers, etc. (See the Drupal 8 backwards compatibility and internal API policy for details.) Developers and site owners should test the release candidate to prepare for these changes.
8.4.x will remain open for new development during the 8.3.x release candidate phase.
Drupal 8.3.0 will be released on April 5th, 2017.

No Drupal 8.2.x or 7.x releases planned
March 1 is also a monthly core patch (bug fix) release window for Drupal 8 and 7, but no patch release is planned. This is also the final bug fix release window for 8.2.x (meaning 8.2.x will not receive further development or support aside from its final security release window on March 15). Sites should plan to update to Drupal 8.3.0 on April 5.
I work on a lot of different things. Some are applications, some are libraries, some I started, some other people started, etc. I have way more stuff to do than I could possibly get done, so I try to spend my time on things "that matter".
For Open Source software that doesn't have an established community, this is difficult.
This post is a wandering stream of consciousness covering my journey figuring out who uses Bleach.
Read more… (4 mins to read)
One of the products I have done some work on at Red Hat has recently been released to customers and there have been a few things written about it:
- Getting started with OpenShift Java S2I at Red Hat Developers
- Red Hat Brings Cloud Native Services to Every Java Workload at the OpenShift blog
- Red Hat tweet
The truth of life is the cremation ground.
It is the abode of Shiva.
Kali's tandava dance
offers its salutation to Shiva.
Let’s say you got a 64-bit ARM device running Android. For instance, the Tegra X1-based NVIDIA Shield TV. Now, let’s say you are also interested in the latest greatest content from the dev branch, for example to try out some upcoming Vulkan enablers from here and here, and want to see all this running on the big screen with Android TV. How do we get Qt, or at least the basic modules like QtGui, QtQuick, etc. up and running on there?
Our test device.
The Qt documentation and wiki pages document the process fairly well. One thing to note is that a sufficient MinGW toolchain is easily obtainable by installing the official 32-bit MinGW package from Qt 5.8. Visual Studio is not sufficient as of today.
Once MinGW, Perl, git, Java, Ant, the Android SDK, and the 32-bit Android NDK are installed, open a Qt MinGW command prompt and set some environment variables:

set PATH=c:\android\tools;c:\android\platform-tools;c:\android\android-ndk-r13b;c:\android\qtbase\bin;C:\Program Files\Java\jdk1.8.0_121\bin;c:\android\ant\bin;%PATH%
set ANDROID_API_VERSION=android-24
set ANDROID_SDK_ROOT=c:\android
set ANDROID_BUILD_TOOLS_REVISION=25.0.2
Adapt the paths as necessary. Here we assume that the Android SDK is in c:\android, the NDK in android-ndk-r13b, qtbase/dev is checked out to c:\android\qtbase, etc.
The Shield TV has Android 7.0 and the API level is 24. This is great for trying out Vulkan in particular since the level 24 NDK comes with the Vulkan headers, unlike level 23.

Build qtbase
Now the fun part: configure. Note the architecture.

configure -developer-build -release -platform win32-g++ -xplatform android-g++ -android-arch arm64-v8a -android-ndk c:/android/android-ndk-r13b -android-sdk c:/android -android-ndk-host windows -android-ndk-platform android-24 -android-toolchain-version 4.9 -opensource -confirm-license -nomake tests -nomake examples -v
Once this succeeds, check the output to see if the necessary features (Vulkan in this case) are enabled.
Then build with mingw32-make -j8 or similar.

Deploying
To get androiddeployqt, check out the qttools repo, go to src/androiddeployqt and do qmake and mingw32-make. The result is a host (x86) build of the tool in qtbase/bin.
For general information on androiddeployqt usage, check the documentation.
Here we will also rely on Ant. This means that Ant must either be in the PATH, as shown above, or the location must be provided to androiddeployqt via the --ant parameter.
Now, Qt 5.8.0 and earlier have a small issue with AArch64 Android deployments. Therefore, grab the patch from Gerrit and apply on top of your qtbase tree if it is not there already. (it may or may not have made its way to the dev branch via merges yet)
After this one can simply go to a Qt application, for instance qtbase/examples/opengl/qopenglwidget, and do:

qmake
mingw32-make install INSTALL_ROOT=bld
androiddeployqt --output bld
adb install -r bld/bin/QtApp-debug.apk

Launching
Now that a Qt application is installed, let’s launch it.
Except that it does not show up in the Android TV launcher.
One easy workaround could be to adb shell and do something like the following:

am start -n org.qtproject.example.qopenglwidget/org.qtproject.qt5.android.bindings.QtActivity
Then again, it would be nice to get something like this:
Therefore, let’s edit bld/AndroidManifest.xml:

<intent-filter>
    <action android:name="android.intent.action.MAIN"/>
    <!--<category android:name="android.intent.category.LAUNCHER"/>-->
    <category android:name="android.intent.category.LEANBACK_LAUNCHER" />
</intent-filter>
and reinstall by running ant debug install. Changing the category name does the trick.
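If you script your builds, the same manifest tweak can be automated. Here is a hedged Python sketch using the standard library's ElementTree (the function name is made up, and in practice you would read and write the file on disk rather than a string):

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "http://schemas.android.com/apk/res/android"
ET.register_namespace("android", ANDROID_NS)

def use_leanback_launcher(manifest_xml):
    """Swap the LAUNCHER category for LEANBACK_LAUNCHER in every intent-filter."""
    root = ET.fromstring(manifest_xml)
    for cat in root.iter("category"):
        if cat.get("{%s}name" % ANDROID_NS) == "android.intent.category.LAUNCHER":
            cat.set("{%s}name" % ANDROID_NS,
                    "android.intent.category.LEANBACK_LAUNCHER")
    return ET.tostring(root, encoding="unicode")

# Tiny example manifest to demonstrate the rewrite:
manifest = (
    '<manifest xmlns:android="http://schemas.android.com/apk/res/android">'
    '<application><activity><intent-filter>'
    '<action android:name="android.intent.action.MAIN"/>'
    '<category android:name="android.intent.category.LAUNCHER"/>'
    '</intent-filter></activity></application></manifest>'
)
print("LEANBACK_LAUNCHER" in use_leanback_launcher(manifest))  # → True
```

Running a rewrite like this after androiddeployqt would re-apply the category change each time the manifest is regenerated.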
Note that rerunning androiddeployqt overwrites the manifest file. A more reusable alternative would be to make a copy of the template, change it, and use ANDROID_PACKAGE_SOURCE_DIR.

The result
Widget applications, including OpenGL, run fairly well:
Or something more exciting:
No, really. That clear to green is actually done via Vulkan.
And finally, the hellovulkantexture example using QVulkanWindow! (yeah, colors are a bit bad on these photos)
adb logcat is your friend, as usual. Let’s get some proof that our textured quad is indeed drawn via Vulkan:

qt.vulkan: Vulkan init (libvulkan.so)
vulkan : searching for layers in '/data/app/org.qtproject.example.hellovulkantexture-2/lib/arm64'
...
qt.vulkan: Supported Vulkan instance layers: QVector()
qt.vulkan: Supported Vulkan instance extensions: QVector(QVulkanExtension("VK_KHR_surface" 25), QVulkanExtension("VK_KHR_android_surface" 6), QVulkanExtension("VK_EXT_debug_report" 2))
qt.vulkan: Enabling Vulkan instance layers: ()
qt.vulkan: Enabling Vulkan instance extensions: ("VK_EXT_debug_report", "VK_KHR_surface", "VK_KHR_android_surface")
qt.vulkan: QVulkanWindow init
qt.vulkan: 1 physical devices
qt.vulkan: Physical device : name 'NVIDIA Tegra X1' version 361.0.0
qt.vulkan: Using physical device
qt.vulkan: queue family 0: flags=0xf count=16
qt.vulkan: Supported device layers: QVector()
qt.vulkan: Enabling device layers: QVector()
qt.vulkan: Supported device extensions: QVector(QVulkanExtension("VK_KHR_swapchain" 68), QVulkanExtension("VK_KHR_sampler_mirror_clamp_to_edge" 1), QVulkanExtension("VK_NV_dedicated_allocation" 1), QVulkanExtension("VK_NV_glsl_shader" 1))
qt.vulkan: Enabling device extensions: QVector(VK_KHR_swapchain)
qt.vulkan: memtype 0: flags=0x1
qt.vulkan: memtype 1: flags=0x1
qt.vulkan: memtype 2: flags=0x7
qt.vulkan: memtype 3: flags=0xb
qt.vulkan: Picked memtype 2 for host visible memory
qt.vulkan: Picked memtype 0 for device local memory
initResources
uniform buffer offset alignment is 256
qt.vulkan: Creating new swap chain of 2 buffers, size 1920x1080
qt.vulkan: Actual swap chain buffer count: 2
qt.vulkan: Allocating 8847360 bytes for depth-stencil
initSwapChainResources
...
Should you need validation layers, follow the instructions from the Android Vulkan docs and rebuild and redeploy the package after copying the libVkLayer* to the right location.
That’s all for now. Have fun experimenting. The basic Vulkan enablers, including QVulkanWindow, are currently scheduled for Qt 5.10, with support for Windows, Linux/X11, and Android (the list may grow later on).
The post Building the latest greatest for Android AArch64 (with Vulkan teaser) appeared first on Qt Blog.
Some of the tools you might use and the config files they support...
- flake8 - .flake8, setup.cfg, tox.ini, and config/flake8 on Windows
- pytest - pytest.ini, tox.ini, setup.cfg
- coverage.py - .coveragerc, setup.cfg, tox.ini
- mypy - setup.cfg, mypy.ini
- tox - tox.ini
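Most of the config files above are ini-style, which is why setup.cfg can host them all. As a rough sketch of how a tool might pull its own section out of setup.cfg, using only Python's standard configparser (a hypothetical helper, not any of these tools' actual code):

```python
import configparser

# A tiny setup.cfg stand-in with two tool sections.
SETUP_CFG = """\
[flake8]
max-line-length = 120

[tool:pytest]
addopts = -v
"""

def read_tool_section(text, section):
    """Parse ini-style config text and return one tool's settings as a dict."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    if parser.has_section(section):
        return dict(parser[section])
    return {}  # tool not configured here

print(read_tool_section(SETUP_CFG, "flake8"))  # {'max-line-length': '120'}
```

Note that pytest and coverage.py namespace their sections (e.g. [tool:pytest], [coverage:run]) so they don't collide with other tools in the shared file.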
OK, you've convinced me. -- Guido
With that, mypy now also supports setup.cfg, and we can all remove many more config files.
The rules for precedence are easy:
- read --config-file option - if it's incorrect, exit
- read [tool].ini - if correct, stop
- read setup.cfg
What does a setup.cfg look like now? Here's an example setup.cfg with various tools configured. (Note these are nonsensical example configs, not what I suggest you use!)
[coverage:run]
#timid = True

[tool:pytest]
# addopts=-v --cov pygameweb pygameweb/ tests/

[mypy]
#python_version = 2.7

[flake8]
#max-line-length = 120
#max-complexity = 10
#exclude = build,dist,docs/conf.py,somepackage/migrations,*.egg-info

[pylint]
## Run with: pylint --rcfile=setup.cfg somepackage
#disable = C0103,C0111
#ignore = migrations
#ignore-docstrings = yes
#output-format = colorized
In this blog I want to explain the round-up we have done of the refactoring of the acl_contact_cache. In previous sprints we discovered that a lot of performance was lost through the way the acl_contact_cache was used (or rather, not used at all). See also the previous blog post: https://civicrm.org/blog/jaapjansma/the-quest-for-performance-improvements-5th-sprint
The socialist party has 350,000 contacts and around 300 users who can access CiviCRM. Most of the users are only allowed to see the members in their local chapter.
In the previous blog we explained the proof of concept. We now have implemented this proof of concept and the average performance increase was 60%.
We created a table which holds which user has access to which contacts. We then fill this table once in a few hours. See also issue CRM-19934 for the technical implementation of this proof of concept.

Performance increase in the search query
In the next examples we are logged in as a local member who can only see members in the chapter Amersfoort. We then search for persons with the name 'Jan'. And we measure how long the query for searching takes.
The query for presenting the list with letters in the search result looks like:

SELECT count(DISTINCT contact_a.id) as rowCount
FROM civicrm_contact contact_a
LEFT JOIN civicrm_value_geostelsel geostelsel ON contact_a.id = geostelsel.entity_id
LEFT JOIN civicrm_membership membership_access ON contact_a.id = membership_access.contact_id
WHERE ((((contact_a.sort_name LIKE '%jan%'))))
AND (contact_a.id = 803832 OR ((((
    ( geostelsel.`afdeling` = 806816 OR geostelsel.`regio` = 806816 OR geostelsel.`provincie` = 806816 )
    AND ( membership_access.membership_type_id IN (1, 2, 3)
      AND ( membership_access.status_id IN (1, 2, 3)
        OR (membership_access.status_id = '7' AND (membership_access.end_date >= NOW() - INTERVAL 3 MONTH))
      )
    )
  ) OR contact_a.id = 806816 ))
  AND (contact_a.is_deleted = 0) ))
ORDER BY UPPER(LEFT(contact_a.sort_name, 1)) asc;
As you can see, that is quite a complicated query, and it includes the details about which members the user is allowed to see. Executing just this query takes around 0.435 seconds, and the reason is that MySQL has to check each record in civicrm_contact (which in this case is around 350,000 rows and growing).
After refactoring the ACL cache functionality in CiviCRM core, the query looks different:

SELECT DISTINCT UPPER(LEFT(contact_a.sort_name, 1)) as sort_name
FROM civicrm_contact contact_a
INNER JOIN `civicrm_acl_contacts` `civicrm_acl_contacts` ON `civicrm_acl_contacts`.`contact_id` = `contact_a`.`id`
WHERE (((( contact_a.sort_name LIKE '%jan%' ))))
AND `civicrm_acl_contacts`.`operation_type` = '2'
AND `civicrm_acl_contacts`.`user_id` = '803832'
AND `civicrm_acl_contacts`.`domain_id` = '1'
AND (contact_a.is_deleted = 0)
ORDER BY UPPER(LEFT(contact_a.sort_name, 1)) asc
The query now takes around 0.022 seconds to run (20 times faster).

Explanation
How does this new functionality work?
1. Every time an ACL restriction is needed in a query, CiviCRM core only does an inner join on the civicrm_acl_contacts table, and that is all.
2. The inner join is generated in the service 'acl_contact_cache'; that service also checks whether the civicrm_acl_contacts table needs to be updated or not.
3. Whether an update of the civicrm_acl_contacts table is needed depends on the setting under Administer --> System Settings --> Misc --> ACL Contact Cache Validity (in minutes).
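The join in step 1 can be illustrated with a toy schema. This uses SQLite via Python purely for illustration; the real table is civicrm_acl_contacts in MySQL, and the table and column names here are simplified stand-ins:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contact (id INTEGER PRIMARY KEY, sort_name TEXT);
-- cache table: one row per (user, contact) the user is allowed to see
CREATE TABLE acl_contacts (user_id INTEGER, contact_id INTEGER);
INSERT INTO contact VALUES (1, 'Jan Jansen'), (2, 'Jan de Vries'), (3, 'Piet Smit');
-- user 42 may only see contacts 1 and 3
INSERT INTO acl_contacts VALUES (42, 1), (42, 3);
""")

# The search query only needs a cheap INNER JOIN on the cache table
# instead of re-evaluating all the ACL rules per contact:
rows = conn.execute("""
    SELECT contact.id, contact.sort_name
    FROM contact
    INNER JOIN acl_contacts ON acl_contacts.contact_id = contact.id
    WHERE acl_contacts.user_id = ? AND contact.sort_name LIKE '%Jan%'
""", (42,)).fetchall()
print(rows)  # [(1, 'Jan Jansen')]
```

Contact 2 also matches 'Jan', but it is filtered out by the join because user 42 has no cache row for it.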
So what does this look like in code?
Below is an example of how you could use the acl_contact_cache service to inject ACL logic into your query:

// First get the service from the Civi Container
$aclContactCache = \Civi::service('acl_contact_cache');
// The $aclContactCache is a class based on \Civi\ACL\ContactCacheInterface
// Now get the aclWhere and aclFrom part for our query
$aclWhere = $aclContactCache->getAclWhereClause(CRM_Core_Permission::VIEW, 'contact_a');
$aclFrom = $aclContactCache->getAclJoin(CRM_Core_Permission::VIEW, 'contact_a');
// Now build our query
$sql = "SELECT contact_a.* FROM civicrm_contact contact_a ".$aclFrom." WHERE 1 AND ".$aclWhere;
// That is it; now execute the query and handle the output...
The reason we use a service in the Civi Container is that this makes it quite easy to override this part of core in your own extension.
The \Civi\ACL\ContactCache class has all the logic for building the ACL queries. This class contains the logic to interact with the ACL settings in CiviCRM, with permissioned relationships, etc. All those settings are taken into account when filling the civicrm_acl_contacts table, which is done per user and per operation once every three hours.
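The "refill once every few hours" decision driven by the validity setting can be sketched roughly like this (Python pseudocode purely for illustration; CiviCRM itself is PHP and the function name here is hypothetical):

```python
import time

def cache_needs_refresh(last_filled_at, validity_minutes, now=None):
    """Return True when the per-user ACL cache is older than its validity window.

    last_filled_at: Unix timestamp of the last fill, or None if never filled.
    validity_minutes: the 'ACL Contact Cache Validity' setting.
    """
    if last_filled_at is None:          # never filled yet: always refresh
        return True
    now = time.time() if now is None else now
    return (now - last_filled_at) > validity_minutes * 60

# With a 180-minute validity, a cache filled 2 hours ago is still fresh:
print(cache_needs_refresh(last_filled_at=0, validity_minutes=180, now=7200))  # False
```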
After much preparation, the tickets for foss-north 2017 are available at foss-north.se – grab them while they are hot!
The call for papers is still open (do you want to talk – register!) so we do not have a final schedule, but you will find our confirmed speakers on the web site as we grow the list. Right now, we know that we have the pleasure to introduce:
- Lydia Pintscher, the product manager of Wikidata, Wikimedia’s knowledge base, as well as the president of KDE e.V.
- Lennart Poettering, from Red Hat known for systemd, PulseAudio, Avahi and more.
- Jos Poortvliet, with a background from SUSE and KDE, he now heads marketing at Nextcloud.
The conference covers both software and hardware from the technical perspective. The event is held on April 26 in central Gothenburg, located between Copenhagen, Oslo and Stockholm, with an international airport.
This is a great excuse to visit a really nice part of Sweden while attending a nice conference – welcome!
The world moved on to https but the Tcl http package only supports unencrypted http. You can combine it with the tls package as explained in the Wiki, but that seems to be overly complicated compared to just loading the TclCurl binding and moving on with something like this:

package require TclCurl
# download to a variable
curl::transfer -url https://sven.stormbind.net -bodyvar page
# or store it in a file
curl::transfer -url https://sven.stormbind.net -file page.html
Now the remaining problem is that the code is unmaintained upstream and there is one codebase on bitbucket and one on github. While I fed patches to the bitbucket repo and thus based the Debian package on that repo, the github repo has diverged in a different direction.