I just came back from two weeks in Heidelberg for DebCamp15 and DebConf15.
In the first week, besides helping out DebConf's infrastructure team with network setup, I tried to make some progress on the library transitions triggered by libstdc++6's C++11 changes. At first, I spent many hours going through header files for a bunch of libraries trying to figure out if the public API involved std::string or std::list. It turns out that is time-consuming, error-prone, and pretty efficient at making me lose the will to live. So I ended up stealing a script from Steve Langasek to automatically rename library packages for this transition. This ended in 29 non-maintainer uploads to the NEW queue, quickly processed by the FTP team. Sadly the transition is not quite there yet, as making progress with the initial set of packages reveals more libraries that need renaming.
Building on some earlier work from Laurent Bigonville, I've also moved the setuid root Xorg wrapper from the xserver-xorg package to xserver-xorg-legacy, which is now in experimental. Hopefully that will make its way to sid and stretch soon (need to figure out what to do with non-KMS drivers first).
Finally, with the help of the security team, the security tracker was moved to a new VM that will hopefully not eat its root filesystem every week as the old one was doing the last few months. Of course, the evening we chose to do this was the night DebConf15's network was being overhauled, which made things more interesting.
DebConf itself was the opportunity to meet a lot of people. I was particularly happy to meet Andreas Boll, who has been a member of pkg-xorg for two years now, working on our mesa package, among other things. I didn't get to see a lot of talks (too many other things going on), but did enjoy Enrico's stand-up comedy, the CitizenFour screening, and Jacob Appelbaum's keynote. Thankfully, for the rest, the video team has done a great job as usual.

Note: the above picture is by Aigars Mahinovs, licensed under CC-BY 2.0.
I've spent a fair amount of time thinking about how to win back the Open Web. But in the case of digital distributors (e.g. closed aggregators like Facebook, Google, Apple, Amazon, and Flipboard), superior, push-based user experiences have won the hearts and minds of end users, and have enabled distributors to attract and retain audiences in ways that individual publishers on the Open Web currently can't.
In today's world, there is a clear role for both digital distributors and Open Web publishers. Each needs the other to thrive. The Open Web provides distributors content to aggregate, curate and deliver to its users, and distributors provide the Open Web reach in return. The user benefits from this symbiosis, because it's easier to discover relevant content.
As I see it, there are two important observations. First, digital distributors have out-innovated the Open Web in terms of conveniently delivering relevant content; the usability gap between these closed distributors and the Open Web is wide, and won't be overcome without a new disruptive technology. Second, the digital distributors haven't provided the pure profit motives for individual publishers to divest their websites and fully embrace distributors.
However, it begs some interesting questions for the future of the web. What does the rise of digital distributors mean for the Open Web? If distributors become successful in enabling publishers to monetize their content, is there a point at which distributors create enough value for publishers to stop having their own websites? If distributors are capturing market share because of a superior user experience, is there a future technology that could disrupt them? And the ultimate question: who will win, digital distributors or the Open Web?
I see three distinct scenarios that could play out over the next few years, which I'll explore in this post.
This image summarizes different scenarios for the future of the web. Each scenario has a label in the top-left corner which I'll refer to in this blog post. A larger version of this image can be found at http://buytaert.net/sites/buytaert.net/files/images/blog/digital-distrib....

Scenario 1: Digital distributors provide commercial value to publishers (A1 → A3/B3)
Digital distributors provide publishers reach, but without tangible commercial benefits, they risk being perceived as diluting or even destroying value for publishers rather than adding it. Right now, digital distributors are in early, experimental phases of enabling publishers to monetize their content. Facebook's Instant Articles currently lets publishers retain 100 percent of revenue from the ad inventory they sell. Flipboard, in an effort to stave off rivals like Apple News, has experimented with everything from publisher paywalls to native advertising as revenue models. Expect much more experimentation with different monetization models, and more dealmaking between publishers and digital distributors.
If digital distributors like Facebook succeed in delivering substantial commercial value to publishers, publishers may fully embrace the distributor model and even divest their own websites' front ends, especially if they could make the vast majority of their revenue from Facebook rather than from their own websites. I'd be interested to see someone model out a business case for that tipping point. I can imagine a future upstart media company either divesting its website completely or starting from scratch to serve content directly to distributors (and being profitable in the process). This would be unfortunate news for the Open Web, and would mean that content management systems need to focus primarily on multi-channel publishing, and less on their own presentation layer.
As we have seen in other industries, decoupling production from consumption in the supply chain can redefine industries. We also know that this introduces major risks, as it puts a lot of power and control in the hands of a few.

Scenario 2: The Open Web's disruptive innovation happens (A1 → C1/C2)
For the Open Web to win, the next disruptive innovation must focus on narrowing the usability gap with distributors. I've written about a concept called a Personal Information Broker (PIM) in a past post, which could serve as a way to responsibly use customer data to engineer similarly personal, contextually relevant experiences on the Open Web. Think of this as unbundling Facebook: you separate the personal information management system from the content aggregation and curation platform, and make the former available for everyone on the web to use. First, it would help us close the user experience gap, because you could broker your personal information with every website you visit, and every website could instantly provide you a contextual experience regardless of prior knowledge about you. Second, it would enable the creation of more distributors. I like the idea of a PIM making the era of a handful of closed distributors as short as possible. In fact, it's hard to imagine the future of the web without some sort of PIM. In a future post, I'll explore in more detail why the web needs a PIM, and what it may look like.

Scenario 3: Coexistence (A1 → A2/B1/B2)
Finally, in a third combined scenario, neither publishers nor distributors dominate, and both continue to coexist. The Open Web serves as a content hub for distributors, and successfully uses contextualization to improve the user experience on individual websites.

Conclusion
Right now, since distributors are out-innovating on relevance and discovery, publishers are somewhat at their mercy for traffic. However, it remains to be seen whether there is a strong enough profit motive for publishers to divest their websites completely. I can imagine that we'll continue in a coexistence phase for some time, since it's unreasonable to expect either the Open Web or digital distributors to fail. If we work on the next disruptive technology for the Open Web, it's possible that we can shift the pendulum in favor of "open" and narrow the usability gap that exists today. If I were to guess, I'd say that we'll see a move from A1 to B2 in the next 5 years, followed by a move from B2 to C2 over the next 5 to 10 years. Time will tell!
Visit our site to listen to past episodes, comment on the show or find out more about us.

Summary
In this episode we had the opportunity to discuss the world of static site generators with Roberto Alsina of the Nikola project and Justin Mayer of the Pelican project. They explained what static site generators are and why you might want to use one. We asked about why you should choose a Python based static site generator, theming and markup support as well as metadata formats and documentation. We also debated what makes Pelican and Nikola so popular compared to other projects.

Brief Introduction
- Welcome to Podcast.__init__ the podcast about Python and the people who make it great
- Follow us on iTunes, Stitcher or TuneIn
- Give us feedback on iTunes, Twitter, email or Disqus
- We donate our time to you because we love Python and its community. If you would like to return the favor you can send us a donation. Everything that we don’t spend on producing the show will be donated to the PSF to keep the community alive.
- Date of recording - August 08, 2015
- Hosts Tobias Macey and Chris Patti
- Today we are interviewing the core developers of Nikola and Pelican about static site generators
- Monitorial.net <- Justin
- Upriise <- Justin
- Works for Canonical <- Roberto
- How did you get introduced to Python?
- Needed a way to get order data to payment processor for commerce company
- 1996 got involved with Linux
- Found XForms
- Wrote Python bindings
- For our listeners who might not know, what are static site generators and what are some of the advantages they bring over other systems that perform the same function?
- Moves all the work off the computer that serves the website
- Server runs no code
- Smaller surface area for security purposes
- Better performance - important for responsiveness and uptime
- Easier deployment and maintenance
- Easier versioning and migration
- Can version both input and output
- reStructuredText is best supported in Python
- Good language for supporting various markup syntaxes
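To make the advantages above concrete, here is a deliberately tiny, hypothetical static site generator in stdlib-only Python (not code from Pelican or Nikola): it reads plain-text sources, pushes them through a template, and writes static HTML that a web server can deliver without running any code.

```python
from pathlib import Path
from string import Template

# A minimal, hypothetical static site generator: every .txt file in the
# content directory becomes an .html file in the output directory. Real
# tools like Pelican and Nikola add markup parsing, theming, feeds, and
# incremental builds on top of this basic read-render-write loop.
PAGE = Template(
    "<html><head><title>$title</title></head>"
    "<body><h1>$title</h1><p>$body</p></body></html>"
)

def build_site(content_dir, output_dir):
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    for src in Path(content_dir).glob("*.txt"):
        # First line of the source file is the title, the rest is the body.
        title, _, body = src.read_text().partition("\n")
        html = PAGE.substitute(title=title.strip(), body=body.strip())
        (out / (src.stem + ".html")).write_text(html)
```

Because the output is just files, versioning both the input and the generated site (as mentioned above) is a matter of committing two directories.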
- Most static site generators seem to have a primary focus on blogging. What is it about these tools that lend themselves so well to that use case?
- The authors of the tools shape the purpose of the tools
- Most popular among programmers which is a demographic that is likely to have a blog
- Workflow is similar to what programmers are used to
- Still useful for non-chronological pages due to templating system
- Something that struck me comparing the two systems is that they have largely the same kinds of data going into the metadata block for each post, but it’s expressed in a different / incompatible way in each. Have you ever considered agreeing on a standard and even advertising it as such so all static site generators could make use of it?
- Challenging because of the idiosyncratic way problems are solved in each system
- Wouldn’t end up with the same site even if metadata were identical
- Roberto & Justin are talking, this may happen!
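To make the incompatibility discussed above concrete, here are two illustrative front-matter fragments (not canonical examples from either project's documentation) carrying the same metadata; Pelican typically reads reStructuredText field lists under the title, while Nikola reads comment-style directives:

```
Pelican:

My First Post
#############

:date: 2015-08-08
:tags: python, static-sites

Nikola:

.. title: My First Post
.. slug: my-first-post
.. date: 2015-08-08
.. tags: python, static-sites
```

Same date and tags, expressed in different and incompatible ways.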
- The themes in Pelican and Nikola have very different feels and one of the things that initially drew me to Pelican is the larger catalog of themes available. What are some of the challenges involved in creating a theme for a static site generator?
- Many programmers who write SSGs aren’t amazing at HTML
- Pelican and Nikola seem to be the most widely used projects for creating static sites using Python. What do you think is the key to that popularity?
- Frequent updates, good documentation and large community
- Easy to get up and running
- Need to be productive inside of 2 minutes
- Good first impressions are key
- Importance of extensibility
- Core modularity and availability of plugins
- A lot of people have written about the importance (and difficulty) of writing and maintaining good documentation in open source projects. Nikola’s documentation is excellent. How did Nikola manage this in its development process and what can other open source projects learn from this?
- No secrets - just do it and keep it updated.
- Need to look at the tool as if using it for the first time
- What are some specific examples of unique and interesting uses your site generators have been put to?
- kernel.org, Debian, Chicago Linux Users, TransFX (translation house) all use Pelican
- Embedding Jupyter notebooks and MathML rendering in posts
- Site search plugin
- Big adoption in the sciences (Jupyter notebook embedding supported in core)
- Output is forever
- Plugin to trigger internet archive to reindex site
- Nikola’s flexible deployment architecture (e.g. the use of doit tasks) seems to lend itself to some interesting use cases. What was the inspiration for this?
- Build was taking 1 1/2 hours, doit allowed for incremental generation
- Doit is a generic task system. Nikola has no “main”; it’s a collection of doit tasks.
- Is there any specific help that you would like to ask of the audience?
- Contribute themes
- Help with reviewing issues and pull requests
Leaving this here for the next time I have to work with CSV files on the command line. It can do a lot more than just CSVs of course, but that’s likely to be the most useful to me.
At Annertech, there are three things we take very seriously: website/server security, accessibility, and website load times/performance. This article will look at website performance with metrics from recent work we completed for Oxfam Ireland.
We use a suite of tools for performance testing. Some of these include Apache Benchmark, Yahoo's YSlow, and Google's PageSpeed Insights. Our favourite at the moment is NewRelic, though this does come at a cost.
That's what I want to know, too.
IntelliSense? ActiveState Komodo does this. And it does it very well considering the potential complexity of trying to determine what identifiers are possibly valid in a dynamic language.
Debugger? No thanks. I haven't used it yet. [I should probably blog on the perils of debuggers.]
Project Management? GitHub seems to be it. Some IDE integration might be helpful, but the three common command-line operations -- git pull, git commit, and git push -- seem to cover an awful lot of bases.
I've been asked about Python IDEs -- more than once -- and my answer remains the same:
The IDE Doesn't Matter.
One of the more shocking tech decisions I've seen is the development manager who bragged about the benefits of VB. The entire benefit was this: Visual Studio made the otherwise awful VB language acceptable.
The Visual Studio IDE was great. And it made up for the awful language.
The development manager went on to claim that until Eclipse had all the features of Visual Studio, they were sure that Java was not usable. To them, the IDE was the only decision criterion. As though code somehow doesn't have a long tail of support, analysis, and reverse engineering.
Even Free software needs to be funded. Apart from being very collectible, money is really useful: it can buy transportation so contributors can meet, accommodation so they can sleep, time so they can code, write documentation, create icons and other graphics, hardware to test and develop the software on.
With that in mind, KDE is running a fund raiser to fund developer sprints, Synfig is running a fund raiser to fund a full-time developer and Krita… We’re actually trying to make funded development sustainable. Blender is already doing that, of course.
Funding development is a delicate balancing act, though. When we started doing sponsorship for full-time development on Krita, there were some people concerned that paying some community members for development would disenchant others, the ones who didn’t get any of the money. Even Google Summer of Code already raised that question. And there are examples of companies hiring away all community members, killing the project in the process.
Right now, our experience shows that it hasn’t been a problem. That’s partly because we have always been very clear about why we were doing the funding: Lukas had the choice between working on Krita and doing some boring web development work, and his goal was fixing bugs and performance issues, things nobody had time for, back then. Dmitry was going to leave university and needed a job, and we definitely didn’t want to lose him for the project.
In the end, people need food, and every line of code that’s written for Krita is one line more. And those lines translate to increased development speed, which leads to a more interesting project, which leads to more contributors. It’s a virtuous circle. And there’s still so much we can do to make Krita better!
So, what are we currently doing to fund Krita development, and what are our goals, and what would be the associated budget?
Right now, we are:
- Selling merchandise: this doesn’t work. We’ve tried dedicated webshops, selling tote bags and mugs and things, but total sales are under a hundred euros, which makes it not worth the hassle.
- Selling training DVD’s: Ramon Miranda’s Muses DVD is still a big success. Physical copies and downloads are priced the same. There’ll be a new DVD, called “Secrets of Krita”, by Timothée Giet this year, and this week, we’ll start selling USB sticks (credit-card shaped) with the training DVD’s and a portable version of Krita for Windows and OSX and maybe even Linux.
- The Krita Development Fund. It comes in two flavors. For big fans of Krita, there’s the development fund for individual users. You decide how much a month you can spare for Krita, and set up an automatic payment profile with Paypal or a direct bank transfer. The business development fund has a minimum amount of 50 euros/month and gives access to the CentOS builds we make.
- Individual donations. This depends a lot on how much we do publicity-wise, and there are really big donations now and then which makes it hard to figure out what to count on, from month to month, but the amounts are significant. Every individual donor gets a hand-written email saying thank-you.
- We are also selling Krita on Steam. We’ve got a problem here: the Gemini variant of Krita, with the switchable tablet/desktop GUI, got broken with the 2.9 release. But Steam users also get regular new builds of the 2.9 desktop version. Stuart is helping us here, but we need to work harder to interact with our community on Steam!
- And we do one or two big crowd-funding campaigns: our yearly kickstarters. They take about two full-time months to prepare, and you can’t skimp on preparation because then you’ll lose out in the end, and they take significant work to fulfil all the rewards. Reward fulfilment is actually something we pay someone a volunteer allowance to do. We are considering doing a second kickstarter this year, to give me an income, with as its goal producing a finished, polished OSX port of Krita. The 2015 kickstarter campaign brought in 27,471.78 euros, but we still need to buy and send out the rewards, at an estimated cost of 5,000 euros.
- Patreon. I’ve started a patreon, but I’m not sure what to offer prospective patrons, so it isn’t up and running yet.
- Bug bounties. The problem here is that the amount of money people think is reasonable for fixing a bug is wildly unrealistic, even for a project that is as cheap to develop as Krita. To be realistic, you have to count on 250 euros for a day of work. I’ve sent out a couple of quotations, but… if you realize that adding support for loading group layers from XCF files already takes three days, most people simply cannot bear the price of a bug fix individually.
So, let’s do the sums for the first 8 months of 2015:

- Paypal (merchandise, training materials, development fund, kickstarter-through-paypal and smaller individual donations): 8,902.04
- Bank transfers (the big individual donations usually arrive directly at our bank account, including a one-time donation to sponsor the port of Krita to Qt5): 15,589.00
- Steam: 5,150.97
- Kickstarter: 27,471.78
- Total: 57,113.79
So, the Krita Foundation’s current yearly budget is roughly 65,000 euros, which is enough to employ Dmitry full-time and me part-time. The first goal really is to make sure I can work on Krita full-time again. Since KO broke down, that’s been hard, and I’ve spent five months on the really exciting Plasma Phone project for Blue Systems. That was a wonderful experience, but it had a direct influence on the speed of Krita development, both code-wise, as well as in terms of growing the userbase and keeping people involved.
What we also have tried is approaching VFX and game studios, selling support and custom development. This isn’t a big success yet, and that’s puzzling me some. All these studios are on Linux. All their software, except for their 2D painting application, is on Linux. They want to use Krita, on Linux. And every time we are in contact with some studio, they tell us they want Krita. Except, there’s some feature missing, something that needs improved… And we make a very modest quote, one that doesn’t come near what custom development should cost, and silence is the result.
Developing Krita is actually really cheap. We don’t have any overhead: no management, no office, modest hardware needs. With 5,000 euros we can fund one full-time developer for one month, with something to spare for hardware, sprints and other costs, like the license for the administration software, stamps and envelopes. The first goal would be to double our budget, so we can have two full-time developers, but in the end, I would like to be able to fund four to five full-time developers, including me, and that means we’re looking at a year budget of roughly 300,000 euros. With that budget, we’d surpass every existing 2D painting application, and it’s about what Adobe or Corel would need to budget for one developer per year!
Taking it from here, what are the next steps? I still think that without direct involvement of people and organizations who want to use Krita in a commercial, professional setting, we cannot reach the target budget. I’m too much a tech geek — there’s a reason KO failed, and that is that we were horrible at sales — to figure out how to reach out and convince people that supporting Krita would be a winning proposition! Answers on a post-card, please!
Hi there folks. I have good news for you. I have two coupons for a video course on Udemy. The name of the course is “Learn Python GUI programming using Qt framework” and it costs $79. It is taught by Bogdan Milanovich. I took this course previously when I was just getting started with GUI development in Python. At the time I took it, the course was incomplete, but it now appears that the instructor has completed it.
The contents of the course can be checked from here.
The coupons are valid for only 100 people in total. I don’t want to waste them and want to give them only to those who are really interested in this course. The course costs $79 in total but with the coupon I have, you can get it for free.
There are a couple of ways you can ask me for the coupon. The first one is to email me. Secondly, you can send me a message through my Facebook page and lastly you can comment on this post and I would get back to you.
Till next time! Goodbye! :)
During Jacob Applebaum's talk at DebConf15, he noted that Debian should TLS-enable all services, especially the mirrors.
His reasoning was that when a high-value target downloads a security update for package foo, an adversary knows that the target is still running a vulnerable version of foo and can try to attack it before the security update has been installed.
In this specific case, TLS is not of much use, though. If the target downloads 4.7 MiB right after a security update of 4.7 MiB has been released, or downloads from security.debian.org at all, it's still obvious what's happening. Even padding won't help much, as a 5 MiB download will also be suspicious. The mere act of downloading anything from the mirrors after an update has been released is reason enough to attempt an attack.
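As a toy illustration of that argument (hypothetical code, not any real tool), an observer who sees only the size of an encrypted transfer can compare it against the published sizes of recent security updates:

```python
# Hypothetical illustration of size-based traffic analysis: TLS hides the
# content of a download but not (approximately) its size. If only one
# recently published security update matches the observed transfer size,
# the observer learns which vulnerable package the target was running.
def matching_updates(observed_bytes, published_updates, tolerance=0.02):
    """Return the names of updates whose published size is within
    `tolerance` (fraction) of the observed transfer size; padding only
    shifts sizes slightly, so a small tolerance still finds the match."""
    return [name for name, size in published_updates.items()
            if abs(size - observed_bytes) <= tolerance * size]

# Illustrative sizes: "foo" is the ~4.7 MiB update from the example above.
updates = {"foo": 4_928_307, "bar": 1_203_411, "baz": 88_002}
print(matching_updates(5_000_000, updates))  # → ['foo']
```

The point is that encryption alone does not hide *which* update was fetched, which is why the post turns to Tor next.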
The solution is, of course, Tor.
weasel was nice enough to set up a hidden service on Debian's infrastructure; initially we agreed that he would just give me a VM and I would do the actual work, but he went the full way on his own. Thanks :) This service is not redundant, it uses a key which is stored on the local drive, the .onion will change, and things are expected to break.
But at least this service exists now and can be used, tested, and put under some load:

http://vwakviie2ienjx6t.onion/
I couldn't get apt-get to be content with a .onion in /etc/apt/sources.list and Acquire::socks::proxy "socks://127.0.0.1:9050"; in /etc/apt/apt.conf, but the torify wrapper worked like a charm. What follows is, to the best of my knowledge, the first ever download from Debian's "official" Tor-enabled mirror:

~ # apt-get install torsocks
~ # mv /etc/apt/sources.list /etc/apt/sources.list.backup
~ # echo 'deb http://vwakviie2ienjx6t.onion/debian/ unstable main non-free contrib' > /etc/apt/sources.list
~ # torify apt-get update
Get:1 http://vwakviie2ienjx6t.onion unstable InRelease [215 kB]
Get:2 http://vwakviie2ienjx6t.onion unstable/main amd64 Packages [7548 kB]
Get:3 http://vwakviie2ienjx6t.onion unstable/non-free amd64 Packages [91.9 kB]
Get:4 http://vwakviie2ienjx6t.onion unstable/contrib amd64 Packages [58.5 kB]
Get:5 http://vwakviie2ienjx6t.onion unstable/main i386 Packages [7541 kB]
Get:6 http://vwakviie2ienjx6t.onion unstable/non-free i386 Packages [85.4 kB]
Get:7 http://vwakviie2ienjx6t.onion unstable/contrib i386 Packages [58.1 kB]
Get:8 http://vwakviie2ienjx6t.onion unstable/contrib Translation-en [45.7 kB]
Get:9 http://vwakviie2ienjx6t.onion unstable/main Translation-en [5060 kB]
Get:10 http://vwakviie2ienjx6t.onion unstable/non-free Translation-en [80.8 kB]
Fetched 20.8 MB in 2min 0s (172 kB/s)
Reading package lists... Done
~ # torify apt-get install vim
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  vim-common vim-nox vim-runtime vim-tiny
Suggested packages:
  ctags vim-doc vim-scripts cscope indent
The following packages will be upgraded:
  vim vim-common vim-nox vim-runtime vim-tiny
5 upgraded, 0 newly installed, 0 to remove and 661 not upgraded.
Need to get 0 B/7719 kB of archives.
After this operation, 2048 B disk space will be freed.
Do you want to continue? [Y/n]
Retrieving bug reports... Done
Parsing Found/Fixed information... Done
Reading changelogs... Done
(Reading database ... 316427 files and directories currently installed.)
Preparing to unpack .../vim-nox_2%3a7.4.826-1_amd64.deb ...
Unpacking vim-nox (2:7.4.826-1) over (2:7.4.712-3) ...
Preparing to unpack .../vim_2%3a7.4.826-1_amd64.deb ...
Unpacking vim (2:7.4.826-1) over (2:7.4.712-3) ...
Preparing to unpack .../vim-tiny_2%3a7.4.826-1_amd64.deb ...
Unpacking vim-tiny (2:7.4.826-1) over (2:7.4.712-3) ...
Preparing to unpack .../vim-runtime_2%3a7.4.826-1_all.deb ...
Unpacking vim-runtime (2:7.4.826-1) over (2:7.4.712-3) ...
Preparing to unpack .../vim-common_2%3a7.4.826-1_amd64.deb ...
Unpacking vim-common (2:7.4.826-1) over (2:7.4.712-3) ...
Processing triggers for man-db (184.108.40.206-5) ...
Processing triggers for mime-support (3.58) ...
Processing triggers for desktop-file-utils (0.22-1) ...
Processing triggers for hicolor-icon-theme (0.13-1) ...
Setting up vim-common (2:7.4.826-1) ...
Setting up vim-runtime (2:7.4.826-1) ...
Processing /usr/share/vim/addons/doc
Setting up vim-nox (2:7.4.826-1) ...
Setting up vim (2:7.4.826-1) ...
Setting up vim-tiny (2:7.4.826-1) ...
~ #
More services will follow. noodles, weasel, and I agreed that the project as a whole should aim to Tor-enable the complete package lifecycle, package information, and the website.
A "more secure" install option on the official images, which among other things sets up apt, apt-listbugs, dput, reportbug, et al. to use Tor without further configuration, could even be a realistic stretch goal.
The first releases of Fabric8 v2 have been using a JAX-RS based Kubernetes client built on Apache CXF. The client was great, but we always wanted to provide something thinner, with fewer dependencies (so that it's easier to adopt). We also wanted to give it a facelift and build a DSL around it so that it becomes easier to use and read.
The new client currently lives at: https://github.com/fabric8io/kubernetes-client and it provides the following modules:
Let's have a quick look on how you can create, list and delete things using the client:
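(The embedded code snippet seems to have been lost from this copy of the post, so here is a rough sketch of that kind of usage with the DSL; the namespace and resource names are placeholders.)

```java
import io.fabric8.kubernetes.api.model.Namespace;
import io.fabric8.kubernetes.api.model.NamespaceBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class Example {
    public static void main(String[] args) {
        KubernetesClient client = new DefaultKubernetesClient();

        // Create a namespace using the generated fluent builders.
        Namespace ns = new NamespaceBuilder()
                .withNewMetadata().withName("my-namespace").endMetadata()
                .build();
        client.namespaces().create(ns);

        // List the pods in that namespace.
        client.pods().inNamespace("my-namespace").list().getItems()
              .forEach(p -> System.out.println(p.getMetadata().getName()));

        // Delete the namespace again.
        client.namespaces().withName("my-namespace").delete();
    }
}
```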
The snippet above is pretty much self-explanatory (and that's the beauty of using a DSL), but I still have a blog post to fill, so I'll provide as many details as possible.
The client domain model
You could think of the client as a union of two things:
- The Kubernetes domain model.
- The DSL around the model.
We needed a way of manipulating these JSON objects in Java (taking advantage of code completion, etc.) while staying as close as possible to the original format. A POJO representation of the JSON objects works for manipulation, but it doesn't quite feel like JSON, and it becomes unwieldy for JSON with deep nesting. So instead, we decided to generate fluent builders on top of those POJOs that follow the exact same structure as the original JSON.
For example, here is the JSON object of a Kubernetes Service:
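(The JSON example itself seems to be missing from this copy of the post; a trimmed-down Service object of the general shape under discussion, with illustrative names and ports, looks like this:)

```json
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "my-service",
    "labels": { "app": "my-app" }
  },
  "spec": {
    "selector": { "app": "my-app" },
    "ports": [
      { "protocol": "TCP", "port": 80, "targetPort": 8080 }
    ]
  }
}
```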
The Java equivalent using Fluent Builders could be:
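(The snippet is also missing here; using the generated builders, an equivalent construction might look like the following, with each withNew.../end... pair mirroring one level of JSON nesting. Names and ports are illustrative.)

```java
import io.fabric8.kubernetes.api.model.Service;
import io.fabric8.kubernetes.api.model.ServiceBuilder;

// Each withNew.../end... pair corresponds to one level of nesting
// in the JSON object, so the builder reads like the JSON itself.
Service service = new ServiceBuilder()
        .withNewMetadata()
            .withName("my-service")
            .addToLabels("app", "my-app")
        .endMetadata()
        .withNewSpec()
            .addToSelector("app", "my-app")
            .addNewPort()
                .withProtocol("TCP")
                .withPort(80)
                .withNewTargetPort(8080)
            .endPort()
        .endSpec()
        .build();
```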
The domain model lives on its own project: Fabric8's Kubernetes Model. The model is generated from Kubernetes and Openshift code after a long process:
- The Go source is converted to a JSON schema
- The JSON schema is converted to POJOs
- Fluent builders are generated on top of the POJOs
Getting an instance of the client
Getting an instance of the default client is pretty trivial, since an empty constructor is provided. When the empty constructor is used, the client will use the default settings, which are:
- Kubernetes URL, discovered in the following order:
- System property "kubernetes.master"
- Environment variable "KUBERNETES_MASTER"
- From ".kube/config" file inside user home.
- Using DNS: "https://kubernetes.default.svc"
More fine grained configuration can be provided by passing an instance of the Config object.
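For instance, a hypothetical explicitly configured client (the URL and token below are placeholders) might be created like this:

```java
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

// Override the default discovery chain with explicit settings.
Config config = new ConfigBuilder()
        .withMasterUrl("https://mymaster.example.com:8443")
        .withOauthToken("my-token")
        .build();
KubernetesClient client = new DefaultKubernetesClient(config);
```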
Client extensions and adapters
To support Kubernetes extensions (e.g. OpenShift), the client uses the notions of the Extension and the Adapter. The idea is pretty simple: an extension client extends the default client and implements the Extension. Each client instance can be adapted to the Extension as long as an Adapter can be found via Java's ServiceLoader (forgive me father).
Here's an example of how to adapt any instance of the client to an instance of the OpenshiftClient:
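(The snippet itself appears to be missing from this copy of the post; the adaptation looks roughly like this:)

```java
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.openshift.client.OpenShiftClient;

KubernetesClient client = new DefaultKubernetesClient();

// Adapt the generic client to the OpenShift extension. This throws an
// IllegalArgumentException if the server is not an OpenShift installation.
OpenShiftClient oClient = client.adapt(OpenShiftClient.class);
oClient.builds().list();
```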
The code above will work only if /oapi exists in the list of root paths returned by the Kubernetes API server (i.e. the client points to an OpenShift installation). If not, it will throw an IllegalArgumentException.
In case the user is writing code that is bound to OpenShift, they can always directly instantiate the default OpenShift client.
Testing and Mocking
Mocking a client that is talking to an external system is a pretty common case. When the client is flat (doesn't support method chaining) mocking is trivial and there are tons of frameworks out there that can be used for the job. When using a DSL though, things get more complex and require a lot of boilerplate code to wire the pieces together. If the reason is not obvious, let's just say that with mocks you define the behaviour of the mock per method invocation. DSLs tend to have way more methods (with fewer arguments) compared to the equivalent Flat objects. That alone increases the work needed to define the behaviour. Moreover, those methods are chained together by returning intermediate objects, which means that they need to be mocked too, which further increases both the workload and the complexity.
To remove all the boilerplate and make mocking the client pretty trivial, we combined the DSL of the client with the DSL of a mocking framework: EasyMock. This means that the entry point to this DSL is the Kubernetes client DSL itself, but the terminal methods have been modified so that they return "Expectation Setters". An example should make this easier to comprehend.
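Here is a rough sketch of how that reads, assuming the kubernetes-mock module's KubernetesMockClient (class and package names are my best recollection of the fabric8 API, not verified) and a made-up namespace and pod name:

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.mock.KubernetesMockClient;

public class MockExample {
    public static void main(String[] args) {
        KubernetesMockClient mock = new KubernetesMockClient();
        Pod pod = new Pod(); // a canned response

        // Same DSL as the real client, but the terminal get() returns
        // an EasyMock-style "Expectation Setter" instead of hitting a server.
        mock.pods().inNamespace("test").withName("my-pod").get()
            .andReturn(pod).once();

        // Obtain the actual mock instance and use it as a normal client.
        KubernetesClient client = mock.replay();
        Pod result = client.pods().inNamespace("test").withName("my-pod").get();
    }
}
```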
The mocking framework can be easily combined with other Fabric8 components, like the CDI extension. You just have to create a @Produces method that returns the mock.
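A minimal sketch of such a producer, using the standard CDI @Produces annotation (the mock setup itself is assumed to happen elsewhere, e.g. in the test's setup code):

```java
import javax.enterprise.inject.Produces;
import javax.inject.Singleton;

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.mock.KubernetesMockClient;

public class MockClientProducer {
    private final KubernetesMockClient mock = new KubernetesMockClient();

    // CDI will inject this client wherever a KubernetesClient is required.
    @Produces
    @Singleton
    public KubernetesClient kubernetesClient() {
        return mock.replay();
    }
}
```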
I have just released a vastly improved new version of the Kobo Japanese Dictionary Enhancer. It allows you to enhance the Kobo Japanese dictionary with English translations.
The new version now provides 326,064 translated entries, which covers most non-compound words, including Hiragana. In my daily life reading Harry Potter and some other books in Japanese, I haven't found many untranslated words so far.
Please head over to the main page of the project for details and download instructions. If you need my help in creating the updated dictionary, please feel free to contact me.
I therefore spent some time finishing a couple of features in the editor for sources.debian.net. Here are some of the changes:
- Compare the source file with that of another version of the package
- And in order to present that: tabs! Editor tabs!
- At the same time: generated diffs are now presented in a new editor tab, from where you can download them or email them
Get it for Chromium and Iceweasel.
If your browser performs automatic updates of the extensions (the default), you should soon be upgraded to version 0.1.0 or later, bringing all those changes to your browser.
Want to see more? multi-file editing? in-browser storage of the editing session? that and more can be done, so feel free to join me and contribute to the Debian sources online editor!
Moodle is a free and open-source software learning management system written in PHP and distributed under the GNU General Public License. Moodle is used for blended learning, distance education, flipped classroom and other e-learning projects in schools, universities, workplaces and other sectors.
Our main objective was to manage all users from Drupal, i.e., to use Drupal as the front end for managing users. For this purpose, we have a Moodle plugin and a Drupal module. Drupal Services is a Moodle authorization plugin that allows for SSO between Drupal and Moodle. Moodle SSO provides the Drupal functionality required to allow the Moodle training management system to share Drupal sessions via SSO.
In order to make SSO work, we need to ensure that the sites can share cookies. The Drupal and Moodle sites should have URLs like drupal.example.com and moodle.example.com. To make the sites use a shared cookie, we need to set the value of $cookie_domain in the settings.php file on the Drupal site. In our case, the site URLs were something like drupal.example.com and moodle.example.com. For these types of sub-domains, the cookie_domain value can be set like this:

$cookie_domain = ".example.com";
Note: The dot before "example.com" is necessary.
Let's start with the steps that need to be followed to achieve SSO between Drupal and Moodle:
1. Moodle site
This post explains how to create a slideshow in Drupal. There are many ways and plugins available to create a slideshow in Drupal, and I am going to discuss some methods which are very efficient and useful.
1) Using Views slideshow module
2) Using jQuery cSlider plugin
3) Using Bootstrap carousel
1. Using Views slideshow module:
The modules required for this method are:
3) jQuery cycle plugin ( Download here and place it at sites/all/libraries/jquery.cycle/)
Enable the added modules. To create the views slideshow, create a new content type, for instance "Slideshow", with an image field which can be used as the slideshow image.
Add multiple slideshow nodes with images. Then, we have to create a view block with slideshow content. Select "Slideshow" as the format and configure the transition effect via the Settings link.
After saving this view, place the view block in the necessary region at admin/structure/blocks.
2. Using jQuery cSlider plugin:
1) You can download this plugin from here. There is also a demo file in this plugin which can be used as a reference.
... it's the culmination of a 20-year effort to build a world-class software industry in Poland. Eurogamer chronicles the history for us:
- Witcher dev making two "AAA+" games for 2014/15
  The Witcher 2 developer CD Projekt is making two blockbuster "AAA+" games for 2014/2015, the company has confirmed to Eurogamer.
- Seeing Red: The story of CD Projekt
  But those days can wait, because next year Marcin Iwiński will be 40 years old, and it will be 20 years since his CD Projekt adventure began. He dared in those car parks all those years ago, and he has achieved so much. He did not do it alone, he is at pains to point out - at every twist and turn he had help, be it from Michal Kiciński or his brother Adam Kiciński, or Piotr Nielubowicz or Adam Badowski. Without them and many more he wouldn't be here today, sitting before me, wearing a blue hoodie and jeans and a relaxed stubbly smile, surrounded by a company not only continuing to set an example for Poland, but now the wider world as well.
- The Witcher 3: The Skyrim debate, the game on PS4, nuggets of clarification and a whiff of multiplayer
  He pays attention to what The Witcher fans are saying, and the number-one concern he's seen about The Witcher 3 is that fans think the traditionally tight stories of the series will be sacrificed to fit an open world.
Not so. "We don't want to make any compromises in storytelling," he told me. "We simply needed to come up with a larger-scale story. That's it. The world is bigger so we need to fill it with good stories.
- Into the wild: inside The Witcher 3 launch
  As if the Polish Prime Minister wasn't enough, day two had begun with the CD Projekt board having presidential breakfast with Bronislaw Komorowski - remarkable, given that I don't believe Adam Badowski has slept just yet. He disappears for a lie down later when a procession of models and cosplayers from last night's festivities work their way through the office to the accompaniment of drums, delivering invitations to everyone for next week's party. Some 250 people, plus partners, will get together and celebrate their collective achievement. "This will be a time for emotions," Iwiński says.
- The Witcher 3 was going to include ice-skating - but it was cut
  "At some point we had the idea of ice-skating," he said.
"It was more than an idea - it was actually a prototype. You were going to ice-skate and fight..."
I’m writing a replacement for libthread_db. It’s called Infinity.
Why? Because libthread_db is a pain in the ass for debuggers. GDB has to watch for inferiors loading thread libraries. It has to know that, for example, on GNU/Linux, when the inferior loads libpthread.so, GDB has to load the corresponding libthread_db.so into itself and use that to inspect libpthread’s internal structures. How does GDB know where libthread_db is? It doesn’t, it has to search for it. How does it know, when it finds it, that the libthread_db it found is compatible with the libpthread the inferior loaded? It doesn’t, it has to load it to see, then unload it if it didn’t work. How does GDB know that the libthread_db it found is compatible with itself? It doesn’t, it has to load it and, erm, crash if it isn’t. How does GDB manage when the inferior (and its libthread_db) has a different ABI to GDB? Well, it doesn’t.
libthread_db means you can’t debug an application in a RHEL 6 container with a GDB in a RHEL 7 container. Probably. Not safely. Not without using gdbserver, anyway–and there’s no reason you should have to use gdbserver to debug what is essentially a native process.
So. Infinity. In Infinity, inspection functions for debuggers will be shipped as bytecode in ELF notes in the same file as the code they pertain to. libpthread.so, for example, will contain a bunch of Infinity notes, each representing some bit of functionality that GDB currently gets from libthread_db. When the inferior starts or loads libraries, GDB will find the notes in the files it already loaded and register their functions. If GDB notices it has, for example, the full set of functions it requires for thread support then, boom, thread support switches on. This happens regardless of whether libpthread was dynamically or statically linked.
(If you’re using gdbserver, gdbserver gives GDB a list of Infinity functions it’s interested in. When GDB finds these functions it fires the (slightly rewritten) bytecode over to gdbserver and gdbserver takes it from there.)
Concrete things I have are: a bytecode format (but not the bytecode itself), an executable with a couple of handwritten notes (with some junk where the bytecode should be), a readelf that can decode the notes, a BFD that extracts the notes and a GDB that picks them up.
What I’m doing right now is rewriting a function I don’t understand (td_ta_map_lwp2thr) in a language I’m inventing as I go along (i8) that’ll be compiled with a compiler that barely exists (i8c) into a bytecode that’s totally undefined to be executed by an interpreter that doesn’t exist.
(The compiler’s going to be written in Python, and it’ll emit assembly language. It’s more of an assembler, really. Emitting assembler rather than going straight to bytecode simplifies things (e.g. the compiler won’t need to understand numbers!) at the expense of emitting some slightly crappy code (e.g. instruction sequences that add zero). I’m thinking GDB will eventually JIT the bytecode so this won’t matter. GDB will have to JIT if it’s to cope with millions of threads, but jitted Infinity should be faster than libthread_db. None of this is possible now, but it might be sooner than you think with the GDB/GCC integration work that’s happening. Besides, I can think of about five different ways to make an interpreter skip null operations in zero time.)
Future applications for Infinity notes include the run-time linker interface.
Watch this space.
How time flies by when you're having fun. Still remember the day my GSoC proposal was selected, like it was yesterday. Now, here we are at the end of the awesome journey.
So what exactly was my project about?
My project was titled "Better Tooling for Baloo". So what exactly is Baloo? Baloo is a framework responsible for file indexing and search in KDE. It is capable of full text indexing and blazing fast queries, while having a small footprint. My project involved writing and improving introspection and control tools for Baloo.
This involved improving balooctl, the already existing command line tool to control Baloo. I ended up adding various options to it:
- balooctl status now shows the size of the index and the state of the indexer, i.e. what Baloo is up to right now.
- Added balooctl status [file..], which tells whether a file is indexed, and if it is not indexed, whether it is scheduled to be indexed. This option will be useful for debugging cases in which a user cannot find a specific file.
- Added balooctl monitor, which prints out file paths as they are being indexed. Useful for users who want to know what Baloo is doing right now.
- Added balooctl index [file..], which indexes the specified files. Useful for indexing specific files manually, e.g. in case they are in an excluded directory.
Now comes the major part of the project: baloo-monitor. I will explain its various features and the challenges I faced while implementing them below.

Baloo Monitor
As can be seen from the screenshot, the application shows the user Baloo's current state, the file being indexed, total progress, estimated remaining time, and a button to suspend/resume indexing.

Challenges
There were loads of challenges I had to face to get there. The first one was baloofileextractor's behaviour, which I mentioned in my first post. Once that was refactored, I had to deal with the internal queue based architecture, which was pretty complex and didn't work well for introspection. So the next step was to come up with a new architecture, which I failed at miserably, finally realizing how hard it is to design software. Then, with loads of help from my mentor, a better architecture was thought up, which is what Baloo uses now and which works quite well for introspection.
Following this, it was a matter of figuring out how inter-process communication works via D-Bus and writing the required code for the monitor, which, as easy as it sounds now, was pretty hard for me to get working back then.
Predicting the future is hard. Why do I mention this? I had to come up with an estimated-remaining-time algorithm for the monitor. The only information available to calculate that without an insane amount of overhead was the time taken to index past files and the number of files left. Hence: predicting the future. The problem with this is that the time taken to index different types of files varies wildly; for instance, if we're indexing a complete e-book thingy vs a simple mp3 file, the former can take more than triple the amount of time.
I tried the simplest approach to begin with: simply average the time taken to index a single batch so far and multiply it by the number of batches left (we index files in batches). That didn't work out quite well, because we could have a situation where the batches in the beginning were super quick, which would drive down the average, and then we encounter batches of text-heavy files which would slow down the indexing but wouldn't have enough impact on the average time.
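A sketch of this first approach (my own Java illustration with invented names; Baloo itself is C++ code):

```java
import java.util.ArrayList;
import java.util.List;

// Naive estimate: average the per-batch times seen so far and
// multiply by the number of batches left.
public class NaiveEta {
    private final List<Double> batchTimes = new ArrayList<>();

    // Record how long the latest batch took.
    public void addBatchTime(double seconds) {
        batchTimes.add(seconds);
    }

    // Plain average over all batches so far; unusually fast early
    // batches keep dragging this down for the rest of the run.
    public double averageBatchTime() {
        double sum = 0.0;
        for (double t : batchTimes) {
            sum += t;
        }
        return batchTimes.isEmpty() ? 0.0 : sum / batchTimes.size();
    }

    // Estimated remaining time = average batch time * batches left.
    public double estimateRemaining(int batchesLeft) {
        return averageBatchTime() * batchesLeft;
    }
}
```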
The approach I finally used was to keep track of the time taken to index the 5 most recent batches of files and use that to calculate a weighted average, giving the most recent batch the highest weight. This approach works better than the simple approach, as recent batches have a higher impact on the average time, but it can jump around quite a bit. Still, this is the approach I currently use; maybe in the future I will come up with a better one.

The road ahead
We've decided that baloo-monitor belongs in KInfoCenter, and I am in the process of making a KCM for it. All in all, this was an amazing experience, and these were the three months in which I've learnt the most about various fields of software development, from design to implementation. I am going to continue working with KDE for the foreseeable future :).
Thank you KDE and Google for giving me the chance, and a big thanks to my mentor Vishesh Handa for dealing with my, sometimes sloppy, code patiently and pointing out ways to improve.