27 April 2017

Okki

Release of the GNOME Layout Manager script

As we've seen recently, with the help of themes and extensions, you can easily tweak GNOME's configuration so that it behaves like Windows, macOS or Unity.

A number of people nevertheless still regret that GNOME doesn't match their expectations out of the box, and that it takes time to adapt it into an environment that suits them.

For these die-hard grumblers, Bill Mavromatis had the good idea of writing a script that lets the user quickly choose between several desktop layouts. The script then takes care of downloading and applying the right extensions, themes and icons.

GNOME Layout Manager offering a choice of desktop layouts

For now, only the Windows, macOS and Unity layouts are available, but the author is open to new suggestions.
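
Trying it out boils down to fetching and running the script. A minimal sketch, assuming the script is named layoutmanager.sh at the root of the author's GitHub repository (check the project page for the exact path):

# download the script, make it executable, then pick a layout from its menu
wget https://github.com/bill-mavromatis/gnome-layout-manager/raw/master/layoutmanager.sh
chmod +x layoutmanager.sh
./layoutmanager.sh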

A word of warning, though: the project is still young, and for now the script doesn't clean anything up beforehand to start from a fresh base. Trying the various layouts one after the other will only mix them together.

Worse, although this is planned for a future version, there is currently no easy way to roll back. If you have already customized your environment extensively, you will probably have to reconfigure it by hand.

The Unity layout

So, until the script gains features to save our initial configuration and restore it, or go back to a vanilla GNOME, its use is best reserved for freshly installed systems.

(27 April 2017 at 23:40)

Release of the United GNOME theme

In the article about the Ubuntu GNOME 17.04 release, I mentioned concept art by Jovan Petrović imagining what Ubuntu's future could look like after Unity was dropped in favour of GNOME.

And it seems to have struck a chord, since within just a few hours a new theme directly inspired by that concept appeared: United GNOME.

The notification area
A few windows

Once the theme is extracted into your .themes folder (remember that it's a hidden folder, which you can create if it doesn't exist), you have to use the Tweak Tool (gnome-tweak-tool) to enable United GNOME as both the GTK+ theme and the Shell theme (the latter requires the User Themes extension).
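
The same can be done from a terminal. A minimal sketch, assuming the archive and the theme folder are both named United-GNOME and that the User Themes extension is already enabled (adjust the names to whatever the release actually ships):

# unpack into the hidden ~/.themes folder, then activate both themes
mkdir -p ~/.themes
unzip United-GNOME.zip -d ~/.themes
gsettings set org.gnome.desktop.interface gtk-theme "United-GNOME"
gsettings set org.gnome.shell.extensions.user-theme name "United-GNOME"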

And if you already miss Unity, the author recommends installing two extensions, Dash to Dock and Dynamic Panel Transparency.

To push the mimicry all the way, in the Dash to Dock options you can enable the panel mode (extend to the screen edges), then position the dock on the left. And finally, to show the application launcher at the top of the dock, you need to enable the option that places the "Show Applications" shortcut in first position.

If needed, a wallpaper is also shipped with the theme. As for icons, you can use the Moka theme.

But beware: the theme is still young and has a few rendering bugs.

(27 April 2017 at 23:17)

24 April 2017

Dave Neary

Of humans and feelings

It was a Wednesday morning. I just connected to email, to realise that something was wrong with the developer web site. People had been having issues accessing content, and they were upset. What started with “what’s wrong with Trac?” quickly escalated to “this is just one more symptom of how The Company doesn’t care about us community members”.

As I investigated the problem, I realised something horrible. It was all my fault.

I had made a settings change in the Trac instance the night before – attempting to impose some reason and structure on ACLs that had grown organically over time – and had accidentally removed the access of a group containing a number of community members who did not work for The Company.

Oh, crap.

After the panic and cold sweats died down, I felt myself getting angry. These were people who knew me, who I had worked alongside for months, and yet the first reaction for at least a few of them was not to assume this was an honest mistake. It was to go straight to conspiracy theory. This was conscious, deliberate, and nefarious. We may not understand why it was done, but it’s obviously bad, and reflects the disdain of The Company.

Had I not done enough to earn people’s trust?

So I fixed the problem, and walked away. “Don’t respond in anger”, I told myself. I got a cup of coffee, talked about it with someone else, and came back 5 minutes later.

“Look at it from their side”, I said – before I started working with The Company, there had been a strained relationship with the community. Yes, they knew Dave Neary wouldn’t screw them over, but they had no way of knowing that it was Dave Neary’s mistake. I stopped taking it personally. There is deep-seated mistrust, and that takes time to heal, I said to myself.

Yet, how to respond on the mailing list thread? “We apologise for the oversight, blah blah blah” would be interpreted as “of course they fixed it, after they were caught”. But did I really want to put myself out there and admit I had made what was a pretty rookie mistake? Wouldn’t that undermine my credibility?

In the end, I bit the bullet. “I did some long-overdue maintenance on our Trac ACLs yesterday, they’re much cleaner and easier to maintain now that we’ve moved to more clearly defined roles. Unfortunately, I did not test the changes well enough before pushing them live, and I temporarily removed access from all non-The Company employees. It’s fixed now. I messed up, and I am sorry. I will be more careful in the future.” All first person – no hiding behind the corporate identity, no “we stand together”, no sugar-coating.

What happened next surprised me. The most vocal critic in the thread responded immediately to apologise, and to thank me for the transparency and honesty. Within half an hour, a number of people were praising me and The Company for our handling of the incident. The air went out of the outrage balloon, and a potential disaster became a growth opportunity – yes, the people running the community infrastructure are human too, and there is no conspiracy. The Man was not out to get us.

I no longer work for The Company, and the team has scattered to the winds. But I never forgot those cold sweats, that feeling of vulnerability, and the elation that followed the community reaction to a heartfelt mea culpa.

Part of the OSS Communities series – difficult conversations. Contribute your stories and tag them on Twitter with #osscommunities to be included.

(24 April 2017 at 15:55)

18 April 2017

Dave Neary

3 things community managers can learn from the 50 state strategy

This is part of the opensource.com community blogging challenge: Maintaining Existing Community.

There are a lot of parallels between the world of politics and open source development. Open source community members can learn a lot about how political parties cultivate grass-roots support and local organizations, and empower those local organizations to keep people engaged. Between 2005 and 2009, Howard Dean was the chairman of the Democratic National Committee in the United States, and instituted what was known as the “50 state strategy” to grow the Democratic grass roots. That strategy, and what happened after it was changed, can teach community managers some valuable lessons about keeping community contributors. Here are three lessons community managers can learn from it.

Growing grass roots movements takes effort

The 50 state strategy meant allocating scarce resources across parts of the country where there was little or no hope of electing a congressman, as well as spending some resources in areas where there was no credible opposition. Every state and electoral district had some support from the national organization. Dean himself travelled to every state, and identified and empowered young, enthusiastic activists to lead local organizations. This was a lot of work, and many senior Democrats did not agree with the strategy, arguing that it was more important to focus effort on the limited number of races where the resources could make a difference between winning and losing (swing seats).

Similarly, for community managers, we have a limited number of hours in the day, and investing in outreach in areas where we do not have a big community already takes attention away from keeping our current users happy. But growing the community, and keeping community members engaged, means spending time in places where the short-term return on that investment is not clear. Identifying passionate community users and empowering them to create local user groups, staff a stand at a small local conference, or speak at a local meet-up helps keep them engaged and feeling like part of a greater community, and it also helps grow the community for the future.

Local groups mean you are part of the conversation

Because of the 50 state strategy, every political conversation in the USA had Democratic voices expressing their world-view. Every town hall meeting, local election, and teatime conversation had someone who could argue and defend the Democratic viewpoint on issues of local and national importance. This meant that people were aware of what the party stood for, even in regions where that was not a popular platform. It also meant that there was an opportunity to get a feel for how national platform messaging was being received on the ground. And local groups would take that national platform and “adjust” it for a local audience – emphasizing things which were beneficial to the local community.

Open source projects also benefit from having a local community presence, by raising awareness of your project among free software enthusiasts who hear about it at conferences and meet-ups. You also have an opportunity to improve your project, by getting feedback from users on their learning curve in adopting and using it. And you have an increasing number of people who can help you understand what messaging resonates with people, and which arguments for adoption are damp squibs that do not get traction, helping you promote your project more effectively.

Regular contact maintains engagement

After Howard Dean finished his term as head of the DNC in 2009, and Debbie Wasserman-Schultz took over as the DNC chair, the 50 state strategy was abandoned, in favour of a more strategic and focussed investment of efforts in swing states. While there are many possible reasons that can be put forward, it is undeniable that the local Democratic party structures which flourished under Dean have lost traction. The Democratic party has lost hundreds of state legislature seats, dozens of state senate seats, and a number of governorships in “red” states since 2009, in spite of winning the presidency in 2012. The Democrats have lost control of the House and the Senate nationally, in spite of winning the popular vote in 2016 and 2012.

For community managers, it is equally important to maintain contact with local user groups and community members, to ensure they feel empowered to act for the community, and to give them the resources they need to be successful. In the absence of regular contact, community members are less inclined to volunteer their time to promote the project and maintain a local community.

Summary

Growing local user groups and communities is a lot of work, but it can be very rewarding. Maintaining regular contact, empowering new community members to start a meet-up or a user group in their area, and creating resources for your local community members to speak about and promote your project is a great way to grow the community, and also to make life-long friends. Political organizations have a long history of organizing people to buy into a broader vision and support and promote it in their local communities.

What other lessons can community managers and organizers learn from political organizations?

 

(18 April 2017 at 20:52)

27 February 2017

Frédéric Crozat

Hackweek project: Let's Encrypt DNS-01 validation for acme.sh with Gandi LiveDNS

Last week was SUSE Hackweek and one of my projects was to get Let's Encrypt configured and working on my NAS.

Let's Encrypt is a project aimed at providing SSL certificates for free, in an automated way.

I wanted to get an SSL certificate for my Synology NAS. Synology now natively supports Let's Encrypt, but only if the NAS accepts incoming HTTP / HTTPS connections (which is not always what you want).

Fortunately, the protocol used by Let's Encrypt to validate a hostname (and generate a certificate), the Automatic Certificate Management Environment (ACME), has an alternative validation path, DNS-01, based on DNS.

DNS-01 requires access to your DNS server, so you can publish a validation token that the Let's Encrypt server uses to ensure you own the domain name you are requesting a certificate for.
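
Concretely, the token ends up in a TXT record under a well-known _acme-challenge name; you can check what the validation server will see with something like this (domain name illustrative):

dig +short TXT _acme-challenge.example.com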

There are a lot of ACME implementations, but very few support DNS-01 validation with my DNS provider (gandi.net).

I ended up using acme.sh, fully written in shell script, and tried to plug Gandi DNS support into it.

After some tests, I discovered that Gandi's current DNS service does not allow fast-changing DNS zone information (which is more or less a requirement for DNS-01 validation). Fortunately, Gandi now provides a new LiveDNS service, available in beta, with a RESTful HTTP API.
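
Publishing the challenge record then becomes a single HTTP call. Here is a hedged sketch of what that looks like with curl; the endpoint and payload shape are my reading of the LiveDNS v5 API, so double-check Gandi's documentation before relying on them:

curl -X PUT "https://dns.api.gandi.net/api/v5/domains/example.com/records/_acme-challenge/TXT" \
     -H "X-Api-Key: $GANDI_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{"rrset_ttl": 300, "rrset_values": ["<validation-token>"]}'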

I was able to get it working quite rapidly with curl, and once the prototype was working, I cleaned everything up and created a pull request to integrate the support into acme.sh.
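
With such support merged, issuing a certificate should boil down to exporting the API key and letting acme.sh drive the whole DNS-01 dance. A minimal sketch; the provider and variable names are assumptions based on the merged support, so check the acme.sh dnsapi documentation:

export GANDI_LIVEDNS_KEY="your-api-key"
acme.sh --issue --dns dns_gandi_livedns -d nas.example.com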

Now my NAS has its own Let's Encrypt certificate and will renew it automatically every 90 days. Getting and installing a certificate for another server (running openSUSE Leap) only took me 5 minutes.

This was a pretty productive hackweek!

(27 February 2017 at 17:04)

01 February 2017

Luis Menina

FOSDEM 2017

February is coming, and so is FOSDEM! A chance to catch up with friends, drink beers and eat some carbonnade flamande. Which, incidentally, is a recipe that would fit right into GNOME Recipes. This new application will even get a short introductory talk (20 min). Don't hesitate to drop by the GNOME booth to ask your questions, say hi, or grab some stickers ;)

(01 February 2017 at 17:49)

15 December 2016

Bastien Nocera

Making your own retro keyboard

We're about a week before Christmas, and I'm going to explain how I created a retro keyboard as a gift to my father, who introduced me to computers when he brought back a Thomson TO7 home, all the way back in 1985.

The original idea was to fit a smaller computer, such as a CHIP or Raspberry Pi, inside a Thomson computer, but software update support would have been difficult, the use would have been limited to the built-in programs, and it would have required a separate screen. So I restricted myself to only making a keyboard. It was a big enough task, as we'll see.

How do keyboards work?

Loads of switches, that's how. I'll point you to Michał Trybus' blog post « How to make a keyboard - the matrix » for details on how this works. You'll just need to remember that most of the keyboards in those older computers have no support for xKRO, and that the micro-controller we'll be using already has the necessary pull-up resistors built in.

The keyboard hardware

I chose the smallest Thomson computer available for my project, the MO5. I could have used a stand-alone keyboard, but that would have lost all the charm of it (it just looks like a PC keyboard), and some other computers have much bigger form factors, to accommodate cartridge, cassette or floppy disk readers.

The DCMoto emulator's website includes tons of documentation, including technical documentation explaining the inner workings of each one of the chipsets on the mainboard. In one of those manuals, you'll find this page:



Whoot! The keyboard matrix in detail, no need for us to discover it with a multimeter.

That needs a wash in soapy water

After opening up the computer, and giving the internals a good clean (especially the keyboard, if it has mechanical keys), we'll need to see how the keyboard is connected.

Finicky metal covered plastic

Those keyboards are usually membrane keyboards, with pressure pads, so we'll need to either find replacement connectors at our local electronics store, or desolder the ones on the motherboard. I chose the latter option.

Desoldered connectors

After matching the physical connectors to the rows and columns in the matrix, using a multimeter and a few key presses, we now know which connector pin corresponds to which row or column of the matrix. We can start soldering.

The micro-controller

The micro-controller in my case is a Teensy 2.0, an Atmel AVR-based micro-controller with a very useful firmware that makes it very very difficult to brick. You can either press the little button on the board itself to upload new firmware, or wire it to an external momentary switch. The funny thing is that the Atmega32U4 is 16 times faster than the original CPU (yeah, we're getting old).

I chose to wire it to the "Initial. Prog" ("Reset") button on the keyboard, so as to make it easy to upload new firmware. To do this, I needed to cut a few traces coming out of the physical switch on the board, to avoid interferences from components on the board, using a tile cutter. This is completely optional, and if you're only going to use firmware that you already know at least somewhat works, you can set a key combo to go into firmware upload mode in the firmware. We'll get back to that later.

As far as connecting and soldering to the pins goes, we can use any I/O pins we want, except D6, which is connected to the board's LED. Note that for any deviation from the pinout used in your firmware, you'd need to make changes to it. We'll come back to that again in a minute.

The soldering

Colorful tinning

I wanted to keep the external ports occupied, so it didn't look like there were holes in the case, and there was enough headroom inside the case to fit the original board, the Teensy and the pins on the board. That makes it easy to rewire in case of error. You could also dremel (yes, used as a verb) a hole in the board.

As always, make sure early on that things will fit, especially the cables!

The unnecessary pollution

The firmware

Fairly early on during my research, I found the TMK keyboard firmware, as well as a very well written forum post with detailed explanations on how to modify an existing firmware for your own uses.

This is what I used to modify the firmware for the gh60 keyboard for my own use. You can see here a step-by-step example, implementing the modifications in the same order as the forum post.

Once you've followed the steps, you'll need to compile the firmware. Fedora ships with the necessary packages, so it's a simple:


sudo dnf install -y avr-libc avr-binutils avr-gcc

I also compiled and installed in my $PATH the teensy_cli firmware uploader, and fixed up the udev rules. And after a "make teensy" and a button press...
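
In concrete terms, that last step amounts to something like the following hedged sketch (the upstream tool is usually named teensy_loader_cli, atmega32u4 is the Teensy 2.0's MCU, and the hex file name depends on your build):

# build and flash through the firmware's own target
make teensy
# or push an already-built image by hand; -w waits for the button press
teensy_loader_cli -mmcu=atmega32u4 -w -v gh60_lufa.hex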

It worked first time! This is a good time to verify that all the keys work, and you don't see doubled-up letters because of short circuits in your setup. I had 2 wires touching, and one column that just didn't work.

I also prepared a stand-alone repository, with a firmware that uses the tmk_core from the tmk firmware, instead of modifying an existing one.

Some advice

This isn't the first time I've hacked on hardware, but I'll repeat some old adages and advice, because I rarely heed those warnings, and I regret it...
  • Don't forget the size, length and non-flexibility of cables in your design
  • Plan ahead when you're going to cut or otherwise modify hardware, because you might regret it later
  • Use breadboard cables and pins to connect things, if you have the room
  • Don't hotglue until you've tested and retested and are sure you're not going to make more modifications
That last one explains the slightly funny cabling of my keyboard.

Finishing touches

All Sugru'ed up

To finish things off nicely, I used Sugru to fix the USB cable coming out of the machine in place. And as earlier, it avoids having an opening onto the internals.

There are a couple more things that I'll need to finish up before delivery. First, the keymap I have chosen in the firmware only works when a US keymap is selected. I'll need to make a keymap for Linux, possibly hard-coding it. I will also need to create a Windows keymap for my father to use (yep, genealogy software on Linux isn't quite up to par).

Prototype and final hardware

All this will happen in the aforementioned repository. And if you ever make your own keyboard, I'm happy to merge in changes to this repository with documentation for your Speccy, C64, or Amstrad CPC hacks.

(If somebody wants to buy me a Sega keyboard, I'll gladly work on a non-destructive adapter. Get in touch :)

(15 December 2016 at 16:48)

15 November 2016

Bastien Nocera

Lyon GNOME Bug day #1

Last Friday, both a GNOME bug day and a bank holiday, a few of us got together to squash some bugs, and discuss GNOME and GNOME technologies.

Guillaume, a newcomer in our group, tested the captive portal support for NetworkManager and GNOME in Gentoo, and added instructions on how to enable it to their wiki. He also tested a gateway-related configuration problem, the patch for which I merged after a code review. Near the end of the session, he also rebuilt WebKitGTK+ to test why Google Docs was not working for him anymore in Web. And nobody believed that he could build it that quickly. Looks like opinions based on past experiences are quite hard to change.

Mathieu worked on removing jhbuild's .desktop file as nobody seems to use it, and it was creating the Sundry category for him, in gnome-shell. He also spent time looking into the tracker blocker that is Mozilla's Focus, based on disconnectme's block lists. It's not as effective as uBlock when it comes to blocking adverts, but the memory and performance improvements, and the slow churn rate, could make it a good default blocker to have in Web.

Haïkel looked into using Emeus, potentially the new GTK+ 4.0 layout manager, to implement the series properties page for Videos.

Finally, I added Bolso to jhbuild, and struggled to get gnome-online-accounts/gnome-keyring to behave correctly in my installation, as the application just did not want to log in properly to the service. I also discussed Fedora's privacy policy (inappropriate for Fedora Workstation, as it doesn't cover the services used in the default installation), a potential design for Flatpak support of joypads and removable devices in general, as well as the future design of the Network panel.

(15 November 2016 at 09:48)

03 September 2016

Frédéric Péters

GUADEC 2016, Karlsruhe

Our annual gathering of GNOMies took place in sunny Karlsruhe earlier this month, and as usual it was great to meet again; this year again, GUADEC was the perfect reminder of the "GNOME is people" spirit.

GUADEC 2016 poster in Karlsruhe

The nice thing this year was that almost everyone was staying in the same place, or close by; this favoured social gatherings even more than in previous years. It was also helped by the organized events, every evening, from barbecue to picnic, from a local student-run bar to a beer garden (thanks Centricular), and more.

And during the days? Interesting talks of course, like the one offered by Rosanna about how the foundation runs (and how crazy the US banking system is), or the Builder update by Christian, and team meetings.

Release team meeting

Release team meeting by the pool

Thanks again to the GNOME Foundation for supporting travel and accommodation for lots of people (including me), and to the organizing committee: you made one great GUADEC.


(03 September 2016 at 13:50)

04 August 2016

Olivier Crête

Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

(04 August 2016 at 03:25)

25 July 2016

Luis Menina

Karlsruhe, here we go!

Because I'm missing my GNOME friends, and because GUADEC is the best conference in the world :)

(25 July 2016 at 20:01)

20 June 2016

Guillaume Desmottes

GStreamer leaks tracer

Here at Collabora we are pretty interested in improving QA tools in GStreamer. Thibault, for example, is doing a great job on gst-validate, ensuring that a lot of code paths are regularly tested using real-life scenarios. Last year I added Valgrind support to gst-validate, allowing us to automatically detect memory leaks in test scenarios. My goal was to integrate this as part of GStreamer's automatic QA to prevent memory leak regressions. While this can sometimes be a good approach to tracking leaks, it has a few downsides:

  • Valgrind can be very CPU- and/or memory-consuming, which can be a problem with longer scenarios or on limited hardware such as embedded devices.
  • As a result, running the full test suite with Valgrind can take ages.
  • Valgrind checks for any potential memory leak, which can lead to a lot of false positives, or to leaks in low-level system libraries over which we have little control. We usually work around this problem using suppression files, but they are generally very fragile and depend a lot on the system/distribution used for testing.

I tried to solve these issues with a new approach using GstTracer. Tracers are a new mechanism introduced in GStreamer 1.8, allowing tools to hook into GStreamer internals and collect data. So I started by adding tracer hooks when GstObject and GstMiniObject are created and destroyed. Then I implemented a new tracer tracking the lifetime of (mini)objects and listing those which are still alive when the application is exiting. This worked pretty well, but I needed a way to discard objects which are intentionally leaked (false positives). To do so, I introduced a new (mini)object flag allowing us to mark such objects.

I'm pretty happy with the result: while proof-testing this tool I found and fixed dozens of leaks in GStreamer (core, plugins and tests). Some of those fixes have already reached the 1.8.2 release. It's also very easy to use and doesn't require any external tool, unlike Valgrind (which can be tricky to integrate on some platforms).

To use it you just have to load the leaks tracer with your application and enable tracer logs:

GST_TRACERS="leaks" GST_DEBUG="GST_TRACER:7" gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink

You can also filter out the types of GstObject or GstMiniObject tracked to reduce memory consumption:

GST_TRACERS="leaks(GstEvent,GstMessage)"  GST_DEBUG="GST_TRACER:7" gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink

This tracer has recently been merged into GStreamer core and will be part of the 1.9.1 release.

As follow-up enhancements, I implemented live tracking and checkpointing support using signals, like I already did in gobject-list a while ago. I'd also like to be able to display the creation stack trace of leaked objects, to easily spot the leaked instances. Finally, I opened a bug to discuss the integration of the tracer with the QA system.

(20 June 2016 at 10:03)

25 May 2016

Olivier Crête

GStreamer Spring Hackfest 2016

After missing the last few GStreamer hackfests, I finally managed to attend this time. It was held in Thessaloniki, Greece's second largest city. The city is located by the seaside, and the entire hackfest and related activities were either directly by the sea or just a couple of blocks away.

Collabora was very well represented, with Nicolas, Mathieu and Lubosz also attending.

Nicolas concentrated his efforts on making kmssink and v4l2dec work together to provide zero-copy decoding and display on an Exynos 4 board without a compositor or other form of display manager. Expect a blog post soon explaining how to make this all fit together.

Lubosz showed off his VR kit. He implemented a viewer for planar point clouds acquired from a Kinect. He’s working on a set of GStreamer plugins to play back spherical videos. He’s also promised to blog about all this soon!

Mathieu started the hackfest by investigating the intricacies of Albanian customs, then arrived on the second day in Thessaloniki and hacked on hotdoc, his new fancy documentation generation tool. He’ll also be posting a blog about it, however in the meantime you can read more about it here.

As for myself, I took the opportunity to fix a couple GStreamer bugs that really annoyed me. First, I looked into bug #766422: why glvideomixer and compositor didn’t work with RTSP sources. Then I tried to add a ->set_caps() virtual function to GstAggregator, but it turns out I first needed to delay all serialized events to the output thread to get predictable outcomes and that was trickier than expected. Finally, I got distracted by a bee and decided to start porting the contents of docs.gstreamer.com to Markdown and updating it to the GStreamer 1.0 API so we can finally retire the old GStreamer.com website.

I’d also like to thank Sebastian and Vivia for organising the hackfest and for making us all feel welcomed!

GStreamer Hackfest Venue

(25 May 2016 at 20:43)

03 April 2016

Didier Roche

Ubuntu booth and talks at JDLL 2016

The "Journée Du Logiciel Libre" are a very nice event in Lyon (France) over a full week-end where the public is invited to come, talk and assist conferences around free software.

Of course, the Ubuntu-fr team is present and has a nice booth here.


I'm also here, and gave a talk about snappy Ubuntu Core to a packed room!


That was followed by a one-hour workshop more focused on developers. A lot of discussions and interesting interactions there!

That was a blast, thanks to everyone who attended! I'm still around tomorrow, do not hesitate to stop at the Ubuntu booth and have a chat!

(03 April 2016 at 10:17)

30 March 2016

Didier Roche

Ubuntu Make 16.03 features Eclipse JEE, Intellij EAP, Kotlin and a bunch of fixes!

I'm really delighted to announce a new Ubuntu Make release, scoring 16.03, bringing updates for a bunch of frameworks while introducing new support!


I'm also really proud that this new release features three new awesome contributors: Tankypon, adding the Superpowers game editor framework, Eakkapat Pattarathamrong, adding more tests for Visual Studio Code, and Almeida, doing some great updates to the Portuguese translations!

The returning awesome work from Galileo Sartor and Omer Sheikh got us new Eclipse JEE installation support, the IntelliJ IDEA EAP and the Kotlin compiler. In addition to those new features, we have a lot of fixes for Unity3D, Android NDK, Clang, Visual Studio Code and IntelliJ-based IDEs, as the server counterpart changed. The usual polish and a bunch of additional smaller incremental improvements joined the party as well! If you are interested in the nifty details, you can head over to the change log.

If you can't wait to try it, grab this latest version directly through its PPA for the Ubuntu 14.04 LTS, 15.10 and xenial releases. This release wouldn't have been possible without our awesome contributor community, thanks to them again!
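
For reference, the PPA route looks roughly like this (a minimal sketch; the eclipse-jee framework name is an assumption based on the new support, so check umake --help for the exact spelling):

sudo add-apt-repository ppa:ubuntu-desktop/ubuntu-make
sudo apt-get update
sudo apt-get install ubuntu-make
umake ide eclipse-jee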

Our issue tracker is full of ideas and opportunities, and pull requests remain open for any issues or suggestions! If you want to be the next featured contributor and want to give a hand, you can refer to this post with useful links!

(30 March 2016 at 09:00)

05 February 2016

Dodji Seketeli

abipkgdiff

07 October 2015

Frédéric Crozat

We are hiring!

Did you know SUSE is hiring?



I've just looked at our counter today (October 7, 2015) and we have 68 open positions.

Moreover, we have two positions which might interest people who are reading this blog through Planet GNOME(-FR):
Interested? Apply!

(07 October 2015 at 16:23)

24 September 2015

Frédéric Péters

GNOME 3.18 is out!

We left codenames and macaques behind years ago, but this year at GUADEC came the idea of a small gift to the GUADEC and GNOME.Asia teams, who do an amazing job. And here we are: the GNOME 3.18 release has been named "Gothenburg" as a token of recognition for this year's GUADEC team.

GUADEC is an important moment in the life of the GNOME project; this is where we gather and assert we are foremost a community of dedicated persons, all working to produce the best computing environment we can. It is with that point in mind that I will nevertheless use this space to pinpoint a single person and congratulate him for all the work he put into 3.18.

Let me introduce him, Carlos Soriano. Round of applause please.

Files (né Nautilus) is a key part of the desktop and has a very long history, but Carlos was not intimidated: helped by the designers and other fellow developers, he put a massive amount of work into it this cycle, and the result is simply fantastic. Again, when I was taking some screenshots for the release notes, I was amazed by all the small details, like the way a "New Bookmark" entry slides in when dragging an item over the sidebar.

"New bookmark" entry in Files sidebar

Voila. Carlos also did many other things. GNOME 3.18 also has many other things. This was my highlight.

Thanks to everyone involved.

(24 September 2015 at 07:45)

09 August 2015

Guillaume Mazoyer

Juniper vSRX on Proxmox VE

Juniper provides a JUNOS, based on the one used by the SRX series, that can be used in a virtual machine. That product is great for Juniper users who want to play with their favorite network OS, and also for people who would like to discover the JUNOS world.

Juniper provides images for VMware and KVM based hypervisors. As a Proxmox VE user, you know that it uses KVM to get things done. So having Firefly Perimeter working on Proxmox VE should be doable without much trouble. Here are the steps to get things working.

Downloading vSRX (Firefly Perimeter)

To setup vSRX on Proxmox VE we need to download the JVA file provided by Juniper. This file is an archive containing the KVM VM definition and the QCOW2 disk of the VM.

Preparing the VM

We then need to create a VM with the following characteristics (see also the end of this article):

  • OS: Other OS types (other)
  • CD/DVD: Do not use any media
  • Hard Disk: VIRTIO0 or IDE0, size of 2 GB, QCOW2 format
  • CPU: at least 2 sockets and 1 core, type KVM64 (default on latest versions of Proxmox VE)
  • Memory: 1024 MB is recommended (but 2048 MB would be better)
  • Network: maximum of 10 interfaces, use VIRTIO or Intel E1000 as model for interfaces

Using the vSRX Disk

Now that the VM definition has been created, we need to use the disk provided in the JVA file. For that we first need to extract it.

# bash junos-vsrx-12.1X47-D10.4-domestic.jva -x

The disk will be available in the directory that has been created. We just need to copy the disk to replace the one used by the VM (replace VMID with the ID of your VM).

# cp junos-vsrx-12.1X47-D10.4-domestic.img /var/lib/vz/images/VMID/vm-VMID-1.qcow2

With this, the VM is now bootable and JUNOS will load properly; we will not be able to use it, though. For that we need to find a way to send the serial output to Proxmox VE's noVNC console.

Getting the serial output in the Proxmox VE console

First we need to find where our VM definition is stored. Usually it is under /etc/pve/nodes/NODENAME/qemu-server/VMID.conf (replace NODENAME and VMID with your own). But we can use a command like the following:

# find / -name 'VMID.conf'

Then we can edit the VM definition file:

# vim /etc/pve/nodes/NODENAME/qemu-server/VMID.conf

And we have to add the following line in the configuration:

args: -serial tcp:localhost:6000,server,nowait

Finally, we need to change the VM display to use Cirrus Logic GD 5446 (cirrus), via the Proxmox VE web interface or just by adding vga: cirrus in the VM definition.
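
Putting both changes together, the relevant lines of the VM definition end up looking like this (NODENAME and VMID as above):

# excerpt of /etc/pve/nodes/NODENAME/qemu-server/VMID.conf
args: -serial tcp:localhost:6000,server,nowait
vga: cirrus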

The End

We can now just start the VM; the output will be displayed in Proxmox VE's console. Enjoy using JUNOS in virtual machines.


Edit (2015-06-17):

After some tests I was glad to see that both the disk and the network interfaces can use the VIRTIO drivers. I would recommend using these drivers, since they are supposed to improve scheduling at the hypervisor level.

(09 August 2015 at 16:35)

02 June 2015

Pascal Terjan

Tuning systemd services

Recently my tor relay started crashing daily. I found out it was because usage had increased (approaching 10MB/s), and every night when logrotate asked it to reload, it failed with:

May 30 04:02:01.000 [notice] Received reload signal (hup). Reloading config and resetting internal state.
May 30 04:02:01.000 [warn] Could not open "/etc/tor/torrc": Too many open files
May 30 04:02:01.000 [warn] Unable to open configuration file "/etc/tor/torrc".
May 30 04:02:01.000 [err] Reading config failed--see warnings above. For usage, try -h.
May 30 04:02:01.000 [warn] Restart failed (config error?). Exiting.
May 30 04:02:01.000 [warn] Couldn't open "/var/lib/tor/state.tmp" (/var/lib/tor/state) for writing: Too many open files

The problem comes from LimitNOFILE=4096 in the service file, and I had no idea how to fix it cleanly.

fcrozat gave me the answer which I'll summarize as:

# create a drop-in override raising the open-file limit
mkdir -p /etc/systemd/system/tor.service.d/
printf '[Service]\nLimitNOFILE=16384\n' > /etc/systemd/system/tor.service.d/limit.conf
systemctl daemon-reload
service tor restart
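
To double-check that the drop-in took effect, systemd can report the limit it actually applies:

systemctl show tor | grep LimitNOFILE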

(02 June 2015 at 10:36)

25 May 2015

Vincent Untz

SUSE Ruling the Stack in Vancouver

Rule the Stack

Last week, during the OpenStack Summit in Vancouver, Intel organized a Rule the Stack contest. That's the third one, after Atlanta a year ago and Paris six months ago. In case you missed the earlier episodes, SUSE won the two previous contests, with Dirk being pretty fast in Atlanta and Adam completing the HA challenge so we could keep the crown. So of course, we had to try again!

For this contest, the rules came with a list of penalties and bonuses, which made it easier for people to participate. And indeed, there were quite a number of participants, with the schedule for booking slots being nearly full. While deploying Kilo was the goal, you could go with older releases and get a 10-minute penalty per release (so +10 minutes for Juno, +20 minutes for Icehouse, and so on). In a similar way, the organizers wanted to see some upgrades and encouraged that with a bonus that could significantly impact the results (-40 minutes) — nobody tried that, though.

And guess what? SUSE kept the crown again. But we also went ahead with a new challenge: outperforming everyone else not just once, but twice, with two totally different methods.

For the super-fast approach, Dirk built again an appliance that has everything pre-installed and that configures the software on boot. This is actually not too difficult thanks to the amazing Kiwi tool and all the knowledge we have accumulated through the years at SUSE about building appliances, and also the small scripts we use for the CI of our OpenStack packages. Still, it required some work to adapt the setup to the contest and also to make sure that our Kilo packages (that were brand new and without much testing) were fully working. The clock result was 9 minutes and 6 seconds, resulting in a negative time of minus 10 minutes and 54 seconds (yes, the text in the picture is wrong) after the bonuses. Pretty impressive.

But we also wanted to show that our product would fare well, so Adam and I started looking at this. We knew it couldn't be faster than the way Dirk picked, so from the start we targeted second position. For this approach there was not much to do, since it was similar to what he did in Paris, and our SUSE OpenStack Cloud Admin appliance had recently been updated. Our first attempt failed miserably due to a nasty bug (which was actually caused by some unicode character in the ID of the USB stick we were using to install the OS... we fixed that bug later in the night). The second attempt went smoother and was actually much faster than we had anticipated: SUSE OpenStack Cloud deployed everything in 23 minutes and 17 seconds, which resulted in a final time of 10 minutes and 17 seconds after bonuses/penalties. And this was with a 10-minute penalty due to the use of Juno (as well as a couple of minutes lost debugging a setup issue that was just mispreparation on our side). A key contributor to this result is our use of Crowbar, which we've kept improving over time, and which really makes it easy and fast to deploy OpenStack.

Wall-clock time for SUSE OpenStack Cloud

These two results wouldn't have been possible without the help of Tom and Ralf, but also without the whole SUSE OpenStack Cloud team that works on a daily basis on our product to improve it and to adapt it to the needs of our customers. We really have an awesome team (and btw, we're hiring)!

For reference, three other contestants succeeded in deploying OpenStack, with the fastest of them ending at 58 minutes after bonuses/penalties. And as I mentioned earlier, there were even more contestants (including some who are not vendors of an OpenStack distribution), which is really good to see. I hope we'll see even more in Tokyo!

Results of the Rule the Stack contest

Also thanks to Intel for organizing this; I'm sure every contestant had fun and there was quite a good mood in the area reserved for the contest.

Update: See also the summary of the contest from the organizers.

(25 May 2015 at 22:58)

12 May 2015

Vincent Untz

Deploying Docker for OpenStack with Crowbar

A couple of months ago, I met some colleagues of mine working on Docker and discussed how much effort it would be to add support for it to SUSE OpenStack Cloud. It's something that had been requested for a long time by quite a number of people, and we never really had time to look into it. To find out how difficult it would be, I started looking at it in the evening; the README confirmed it shouldn't be too hard. But of course, we use Crowbar as our deployment framework, and the manual way of setting it up is not really something we'd want to recommend. Now, would it be "not too hard" or just "easy"? There was only one way to know that... And guess what happened next?

It took a couple of hours (and two patches) to get this working, including the time for packaging the missing dependencies and for testing. That's one of the nice things we benefit from using Crowbar: adding new features like this is relatively straight-forward, and so we can enable people to deploy a full cloud with all of these nice small features, without requiring them to learn about all the technologies and how to deploy them. Of course this was just a first pass (using the Juno code, btw).

Fast-forward a bit, and we decided to integrate this work. Since it was not a simple proof of concept anymore, we went ahead with some more serious testing. This resulted in us backporting patches for the Juno branch, but also making Nova behave a bit better, since it wasn't aware of Docker as a hypervisor. This last point is a major problem if people want to use Docker as well as KVM, Xen, VMware or Hyper-V — the multi-hypervisor support is something that really matters to us, and this issue was actually the first one that got reported to us ;-) To validate all our work, we of course asked tempest to help us, and the results are pretty good (we still have some failures, but they're related to missing features like volume support).

All in all, the integration went really smoothly :-)

Oh, I forgot to mention: there's also a docker plugin for heat. It's now available with our heat packages in the Build Service as openstack-heat-plugin-heat_docker (Kilo, Juno); I haven't played with it yet, but this post should be a good start for anyone who's curious about this plugin.

(12 May 2015 at 08:41)

15 April 2015

Damien Sandras

Be IP is hiring!

In case some readers of this blog would be interested in working with Open Source software and VoIP technologies, Be IP (http://www.beip.be) is hiring a developer. Please see http://www.beip.be/BeIP-Job-Offer.pdf for the job description. You can contact me directly.

(15 April 2015 at 09:58)

08 April 2015

Dodji Seketeli

GNU Cauldron 2015

This year the GNU Cauldron Conference is going to be held in Prague, Czech Republic, from August 7 to 9, 2015.

The GNU Cauldron Conference is a gathering of users and hackers of the GNU toolchain ecosystem.

Meaning that if you are interested in projects remotely related to the GNU C Library, the GNU Compiler Collection, the GNU Debugger, or any toolchain or runtime project that has ties with the GNU system, you are welcome!

If you are part of a Free Software project that uses the GNU toolchain and would like your voice to be heard, or want to hang out with other users and hackers of that space, you are even more than welcome! If you have crazy ideas you'd like to discuss over a nice beverage of your choice, please join!

You just have to send a nice note to tools-cauldron-admin@googlegroups.com saying that you are coming, and that acts as your registration. The number of seats is limited, so please do not drag your feet too much :-)

And if you want to present a talk, well, there is a call for papers under way. You just have to send your abstract to tools-cauldron-admin@googlegroups.com. The exact call for papers can be read here.

So see you there, gals'n guys!

(08 April 2015 at 09:48)

30 March 2015

Guillaume Desmottes

Tracking the reference count of a GstMiniObject using gdb

As part of my work at Collabora, I'm currently adding Valgrind support to the awesome gst-validate tool. The ultimate goal is to run our hundreds of GStreamer tests inside Valgrind as part of the existing QA infrastructure to automatically track memory related regressions (invalid reads, leaks, etc).

Most of the gst-validate changes have already landed and can be used very easily by passing the --valgrind argument to gst-validate-launcher. I'm now focusing on making sure most of our existing tests pass with Valgrind, which means doing quite a lot of memory-leak debugging (everyone loves doing that, right?).

A lot of GStreamer types are based on GstMiniObject instead of the usual GObject. It makes a lot of sense from a performance point of view, but it can make tracking refcount issues harder, as we can't rely on tools such as RefDbg or gobject-list.

I was tracking a GstMiniObject leak today and was looking for a way to get a trace each time its reference count is modified. We can use GST_DEBUG="GST_REFCOUNTING:9" to get logs each time the object is reffed/unreffed, but I was actually interested in the full stack trace. With the help of the French gang (kudos to Dodji, Bastien and Christophe!) I managed to do so using this good old gdb.

The first thing is to break when the object you want to track is created; you can either do this by using b mysource:line in gdb, or just add a G_BREAKPOINT() in your source code. Start your app with gdb as usual, then use:

set logging on
set pagination off

The output can be pretty long, so this will ensure that logs are saved to a file (gdb.txt by default) and that gdb won't bother asking you for confirmation before printing output. Now start your app (run) and once it has paused, use the following command:

watch -location ((GstMiniObject*)caps)->refcount

caps is the name of the instance of the object I want to track, as defined in the scope where I installed my breakpoint; update it to match yours. This command adds a watchpoint on the refcount of the object, which means gdb will now stop each time its value is modified. The -location option ensures that gdb watches the memory associated with the expression; not using it would limit us to the local scope of the variable.

Now we want to display a backtrace each time gdb pauses when this watchpoint is hit. This is done using commands:

commands
bt
continue
end

All the gdb instructions between commands and end will be automatically executed each time gdb pauses because of the watchpoint we just defined. In our case we first want to display a stack trace (bt) and then continue the execution of the program.

We are now all set, we just have to ask gdb to resume the normal execution of the program we are debugging:

continue

This should generate a gdb.txt log file containing something like:

Old value = 1
New value = 2
gst_mini_object_ref (mini_object=0x7ffff001c8f0) at gstminiobject.c:362
362	  return mini_object;
#0  0x00007ffff6f384ed in gst_mini_object_ref (mini_object=0x7ffff001c8f0) at gstminiobject.c:362
#1  0x00007ffff6f38b00 in gst_mini_object_replace (olddata=0x7ffff67f0c58, newdata=0x7ffff001c8f0) at gstminiobject.c:501
#2  0x00007ffff72573ed in gst_caps_replace (old_caps=0x7ffff67f0c58, new_caps=0x7ffff001c8f0) at ../../../gst/gstcaps.h:312
#3  0x00007ffff72578fa in helper_find_suggest (data=0x7ffff67f0c30, probability=GST_TYPE_FIND_MAXIMUM, caps=0x7ffff001c8f0) at gsttypefindhelper.c:230
#4  0x00007ffff6f7d606 in gst_type_find_suggest_simple (find=0x7ffff67f0bf0, probability=100, media_type=0x7ffff5de8a66 "video/mpegts", fieldname=0x7ffff5de8a4e "systemstream") at gsttypefind.c:197
#5  0x00007ffff5ddbec3 in mpeg_ts_type_find (tf=0x7ffff67f0bf0, unused=<optimized out>) at gsttypefindfunctions.c:2381
#6  0x00007ffff6f7dbc7 in gst_type_find_factory_call_function (factory=0x6dbbc0 [GstTypeFindFactory], find=0x7ffff67f0bf0) at gsttypefindfactory.c:215
#7  0x00007ffff7257d83 in gst_type_find_helper_get_range (obj=0x7e8280 [GstProxyPad], parent=0x7d0500 [GstGhostPad], func=0x7ffff6f2aa37 <gst_proxy_pad_getrange_default>, size=10420224, extension=0x7ffff00010e0 "MTS", prob=0x7ffff67f0d04) at gsttypefindhelper.c:355
#8  0x00007ffff683fd43 in gst_type_find_element_loop (pad=0x7e47d0 [GstPad]) at gsttypefindelement.c:1064
#9  0x00007ffff6f79895 in gst_task_func (task=0x7ef050 [GstTask]) at gsttask.c:331
#10 0x00007ffff6f7a971 in default_func (tdata=0x61ac70, pool=0x619910 [GstTaskPool]) at gsttaskpool.c:68
#11 0x0000003ebb070d68 in g_thread_pool_thread_proxy (data=<optimized out>) at gthreadpool.c:307
#12 0x0000003ebb0703d5 in g_thread_proxy (data=0x7ceca0) at gthread.c:764
#13 0x0000003eb880752a in start_thread (arg=0x7ffff67f1700) at pthread_create.c:310
#14 0x0000003eb850022d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
Hardware watchpoint 1: -location ((GstMiniObject*)caps)->refcount

Old value = 2
New value = 1
0x00007ffff6f388b6 in gst_mini_object_unref (mini_object=0x7ffff001c8f0) at gstminiobject.c:442
442	  if (G_UNLIKELY (g_atomic_int_dec_and_test (&mini_object->refcount))) {
#0  0x00007ffff6f388b6 in gst_mini_object_unref (mini_object=0x7ffff001c8f0) at gstminiobject.c:442
#1  0x00007ffff6f7d027 in gst_caps_unref (caps=0x7ffff001c8f0) at ../gst/gstcaps.h:230
#2  0x00007ffff6f7d615 in gst_type_find_suggest_simple (find=0x7ffff67f0bf0, probability=100, media_type=0x7ffff5de8a66 "video/mpegts", fieldname=0x7ffff5de8a4e "systemstream") at gsttypefind.c:198
#3  0x00007ffff5ddbec3 in mpeg_ts_type_find (tf=0x7ffff67f0bf0, unused=<optimized out>) at gsttypefindfunctions.c:2381
#4  0x00007ffff6f7dbc7 in gst_type_find_factory_call_function (factory=0x6dbbc0 [GstTypeFindFactory], find=0x7ffff67f0bf0) at gsttypefindfactory.c:215
#5  0x00007ffff7257d83 in gst_type_find_helper_get_range (obj=0x7e8280 [GstProxyPad], parent=0x7d0500 [GstGhostPad], func=0x7ffff6f2aa37 <gst_proxy_pad_getrange_default>, size=10420224, extension=0x7ffff00010e0 "MTS", prob=0x7ffff67f0d04) at gsttypefindhelper.c:355
#6  0x00007ffff683fd43 in gst_type_find_element_loop (pad=0x7e47d0 [GstPad]) at gsttypefindelement.c:1064
#7  0x00007ffff6f79895 in gst_task_func (task=0x7ef050 [GstTask]) at gsttask.c:331
#8  0x00007ffff6f7a971 in default_func (tdata=0x61ac70, pool=0x619910 [GstTaskPool]) at gsttaskpool.c:68
#9  0x0000003ebb070d68 in g_thread_pool_thread_proxy (data=<optimized out>) at gthreadpool.c:307
#10 0x0000003ebb0703d5 in g_thread_proxy (data=0x7ceca0) at gthread.c:764
#11 0x0000003eb880752a in start_thread (arg=0x7ffff67f1700) at pthread_create.c:310
#12 0x0000003eb850022d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
Hardware watchpoint 1: -location ((GstMiniObject*)caps)->refcount

If you see this kind of error in your log

Error evaluating expression for watchpoint 1 value has been optimized out

this means you may have to rebuild your application and/or its libraries without any optimization. This is pretty easy with autotools:

make clean
CFLAGS="-g3 -ggdb3 -O0" make

Now all you need is to grab a good cup of tea and start digging through this log to find your leak. Good fun!

Edit

Daniel pointed out to me that watchpoints can be pretty unreliable in gdb. So here is another version where we ask gdb to break when our object is reffed/unreffed. You just have to figure out its address using gdb or simply by printing the value of its pointer.

b gst_mini_object_ref if (mini_object == 0xdeadbeef)
b gst_mini_object_unref if (mini_object == 0xdeadbeef)
commands 1 2
bt
cont
end

(30 March 2015 at 17:56)

29 January 2015

Guillaume Mazoyer

Samsung 840 EVO Performance fix

Several weeks ago, Samsung released a fix for the 840 series of their SSDs, which had performance issues with data stored for a long time. While the fix procedure is quite simple to apply on Windows, it can be quite tricky when you use your SSD on a GNU/Linux powered system. To fix your SSD you will need a bootable USB key with the Samsung binaries. Moreover, the Samsung documentation is not really well written and can lead to confusion. So here are the steps to fix the SSDs of dear GNU/Linux users.

Some preps

Firstly, prepare a USB key (at least 512 MB, just to be sure) and download FreeDOS.

Creating the bootable USB key

Once FreeDOS is on your computer, plug the USB key in and find the device name to interact with it. You can generally find the device using the dmesg command, which will output something like this:

[1017607.068095] usb 2-1: new high-speed USB device number 110 using ehci-pci
[1017607.278127] usb 2-1: New USB device found, idVendor=1b1c, idProduct=1ab1
[1017607.278135] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[1017607.278140] usb 2-1: Product: Voyager
[1017607.278145] usb 2-1: Manufacturer: Corsair
[1017607.278150] usb 2-1: SerialNumber: AA00000000000634
[1017607.278936] usb-storage 2-1:1.0: USB Mass Storage device detected
[1017607.279084] scsi12 : usb-storage 2-1:1.0
[1017608.389828] scsi 12:0:0:0: Direct-Access Corsair Voyager 1100 PQ: 0 ANSI: 0 CCS
[1017608.390448] sd 12:0:0:0: Attached scsi generic sg2 type 0
[1017608.391272] sd 12:0:0:0: [sdb] 15663104 512-byte logical blocks: (8.01 GB/7.46 GiB)
[1017608.392259] sd 12:0:0:0: [sdb] Write Protect is off
[1017608.392266] sd 12:0:0:0: [sdb] Mode Sense: 43 00 00 00
[1017608.394784] sd 12:0:0:0: [sdb] No Caching mode page found
[1017608.394792] sd 12:0:0:0: [sdb] Assuming drive cache: write through
[1017608.402247]  sdb: sdb1
[1017608.405637] sd 12:0:0:0: [sdb] Attached SCSI removable disk

In this case you want to use the /dev/sdb drive, as seen in the log.
Now you can just write the FreeDOS image to the USB key. The image is compressed, so you'll need to decompress it first.

$ bunzip2 FreeDOS-1.1-memstick-2-256M.img.bz2
$ dd if=FreeDOS-1.1-memstick-2-256M.img of=/dev/sdb bs=512k
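
Before going further, it doesn't hurt to flush the buffers and check that the key now carries the FreeDOS partition (device name illustrative, as above):

$ sync
$ lsblk -o NAME,SIZE,FSTYPE /dev/sdb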

Copying the Samsung binaries

Download the Samsung binaries.

Mount the USB key and unzip those binaries at the root of the USB key. This way you will be able to use them from FreeDOS later.

# mount /dev/sdb1 /mnt
# unzip Samsung_Performance_Restoration_USB_Bootable.zip
# mv 840Perf/* /mnt
# umount /mnt
# eject /dev/sdb

The fix

Plug the USB key into your machine and reboot the host. Do whatever is necessary to boot from the USB key. Choose option 4 of FreeDOS, "Load FreeDOS without driver".

Once FreeDOS is running, just run the PERF.EXE file and the Samsung tool will start. Enter the index shown in front of the SSD you want to upgrade and fix. The utility will take care of everything (firmware upgrade and fix). Note that the pass to fix the SSD can take some time.

Once the tool has finished fixing your SSD, just reboot the host by typing reboot in FreeDOS. Do not forget to unplug the USB key to avoid booting from it later.

Enjoy your brand new fixed SSD!

(29 January 2015 at 20:41)

25 January 2015

Damien Sandras

Ekiga 5 – Progress Report

Current Status Ekiga 5 has progressed a lot lately. OpenHUB is reporting a High Activity for the project. The main reason behind this is that I am again dedicating much of my spare time to the project. Unfortunately, we are again facing a lack of contributions. Most probably (among others) because the project has been […]

(25 January 2015 à 17:01)

12 November 2014

Baptiste Mille-Mathias

GNOME Trademark and Groupon

If you regularly read the minutes of the GNOME Foundation, you will have noticed that for a few months there has been a dispute about the use of the GNOME trademark by Groupon. The company has released a point-of-sale tablet under the name GNOME despite the trademark being registered by the GNOME Foundation. So long story short, Groupon […]

The post GNOME Trademark and Groupon appeared first on Nothing Fancy.

(12 November 2014 à 07:22)

31 March 2014

Baptiste Mille-Mathias

Some introduction seems to be necessary

It appears my blog is currently reaching some places like planet.gnome.org and planet.fedoraproject.org, so I think some introduction may be necessary. My name is Baptiste Mille-Mathias, I'm French, and I live in the south of France, near Cannes, with my partner Célia, my son Joshua and my daughter Soline. During work days I'm a System/Application Administrator […]

The post Some introduction seems to be necessary appeared first on Nothing Fancy.

(31 March 2014 à 10:32)

16 February 2014

Pascal Terjan

OpenLibernet

I saw a link to OpenLibernet and, after reading their FAQ, I believed there was a fundamental problem. I quickly read the full paper but found no answer.

I guess I have missed something, please explain it to me :)

A peer address is the hash of a cryptographic public key. It is used to encrypt certain packets as part of the routing protocol, serve as a payment address for the payment system (similar to a Bitcoin’s wallet address), but also serves as a unique identifier for a node, similar to IP Addresses in the current internet.

Also, a node may simply generate a new Peer Address anytime it chooses to.

When the balance of a neighbor hits a certain threshold, a payment request is initiated.

Malicious nodes could however cheat their neighbors and refuse to pay them their due traffic. For that, the protocol is designed to punish such malicious behavior through ostracism. A node will be automatically isolated from the network until it pays all its dues and resolves all conflicts with its neighbors.

What is preventing a malicious node from re-joining the network with a new peer address when it gets close to receiving a payment request, and discarding its balance?

The only limitation I see is "First, and to eliminate the churn caused by unstable nodes, a Layer 2 link becomes active only after it has been alive for a set amount of time.", but this is not a problem if you start another client in parallel when getting close to a payment threshold and switch to the new peer address when it is ready.

(16 February 2014 à 17:18)

25 March 2013

Christophe Fergeau

SPICE on OSX, take 2

A while back, I made a Vinagre build for OSX. However, reproducing this build required lots of manual tweaking, the build was not working on newer OSX versions, and in the meantime the recommended SPICE client became remote-viewer. In short, this work was obsolete.

I've recently looked again at this, but this time with the goal of documenting the build process and making the build as easy as possible to reproduce. This is once again based off gtk-osx, with an additional moduleset containing the SPICE modules, and a script to download/install most of what is needed. I've also switched to building remote-viewer instead of vinagre.

This time, I've documented all of this work, but all you should have to do to build remote-viewer for OSX is to run a script, copy a configuration file to the right place, and then run a usual jhbuild build. Read the documentation for more detailed information about how to do an OSX build.

I've uploaded a binary built using these instructions, but it's lacking some features (USB redirection comes to mind), it's slow, etc., so... patches welcome! ;) Feel free to contact me if you are interested in making OSX builds and need help getting started, have build issues, ...

(25 March 2013 à 09:48)

11 December 2012

Christophe Fergeau

FOSDEM 2013 Crossdesktop devroom Call for talks

The call for talks for the Crossdesktop devroom at FOSDEM 2013 comes to its end this Friday. Don't wait: submit your talk proposal about your favourite part of GNOME now!

Proposals should be sent to the crossdesktop devroom mailing list (you don't have to subscribe).

(11 December 2012 à 10:33)

29 November 2012

Daniel Veillard

29 Nov 2012

Wandering in embedded land: part 2, Arduino turned remote control

Now that the Midea AC remote control is mostly deciphered, the next step is to emulate the remote with an Arduino, since that's the system I use for my embedded greenhouse control. While waiting for my mail-ordered IR LED (I didn't want to solder one off my existing AC controllers), I started writing a bit of code and looking at the integration problems.

The hardware side

One of the challenges is that the Arduino system is already heavily packed: basically I use all the digital Input/Output pins except 5 (and 0 and 1, which are hooked to the serial support), and 2 of the 6 analog inputs, as the card already drives 2 SHT1x temp/humidity sensors, 2 light sensors, a home-made 8-way relay board, and a small LCD display. There isn't much room left, physically or in memory, for more wires or code! Fortunately driving a LED requires minimal resources, and the schematic is trivial:

I actually used a 220 Ohm resistor since I didn't have a 100 Ohm one; the only effect is how far away the signal may be received, really not a problem in my case. Also, I initially hooked it to pin 5, which shouldn't have been a problem, as that's the free slot I have available on the Arduino.

The software side

My thinking was: well, I just need to recreate the same set of light patterns to emulate the remote control and that's done; sounds fairly simple. So I started coding routines which would switch the LED on or off for 1T, 3T and 8T durations. Thus the core of the code was like:


void emit_midea_start(void) {
    ir_down(T_8);
    ir_up(T_8);
}

void emit_midea_end(void) {
    ir_down(T_1);
    ir_up(T_8);
}

void emit_midea_byte(byte b) {
    int i;
    byte cur = b;

    for (i = 0; i < 8; i++) {
        ir_down(T_1);
        if (cur & 1)
            ir_up(T_3);
        else
            ir_up(T_1);
        cur >>= 1;
    }
    cur = ~b;
    for (i = 0; i < 8; i++) {
        ir_down(T_1);
        if (cur & 1)
            ir_up(T_3);
        else
            ir_up(T_1);
        cur >>= 1;
    }
}

where ir_down() and ir_up() were respectively activating or deactivating pin 5 (set as OUTPUT) for the given duration, defined as macros.
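
For reference, here is a minimal sketch of what such helpers could have looked like (my own reconstruction, not the original code; IRpin and the T_* microsecond macros are assumed to be defined elsewhere):

void ir_down(unsigned int usec) {
    digitalWrite(IRpin, HIGH);   /* LED on: the receiver's output goes down */
    delayMicroseconds(usec);
}

void ir_up(unsigned int usec) {
    digitalWrite(IRpin, LOW);    /* LED off: the receiver's output goes up */
    delayMicroseconds(usec);
}

As explained below, this naive approach cannot work as such: the receiver expects a 38 KHz carrier, not a steady light.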

Playing with 2 arduinos simultaneously

Of course, to test my code the simplest way was to set up the new module on another Arduino, positioned in front of the Arduino with the IR receptor, running the same code as used for decoding the protocol.

The nice thing is that you can hook up the two Arduinos to 2 different USB cables connected to the same machine; they will report as ttyUSB0 and ttyUSB1, and once you have looked at the serial output you can find which is which. The only cumbersome part is having to select the other serial port when you want to switch box, either to monitor the output or to upload a new version of the code; so far things are rather easy.

Except it just didn't work!!!

Not the Arduino's fault: I actually replaced the IR LED by a normal one from time to time to verify it was firing for a fraction of a second when emitting the sequence. No, the problem was that the IR receiver was detecting transitions, but none of the expected durations, or order; nothing I could really consider a mapping of what my code was sending. So I tweaked the emitting code over and over, rewriting the timing routines in 3 different ways, trying to disable interrupts, etc. Nothing worked!

Clearly there was something I hadn't understood... so I started searching on Google and reading, first about timing issues on the Arduino, but things ought to be correct there, then about existing remote control code for the Arduino and others. Then I hit Ken Shirriff's blog on his IR library for the Arduino and realized that the IR LED and the IR receiver don't operate at the same level. The LED really can just be switched on or off, but the IR receiver is calibrated for a given frequency (38 KHz in this case) and will not report whether it gets IR light, but whether it gets the 38 KHz pulse carried by the IR light. In a nutshell, the IR receiver was decoding my analog 0's but not my 1's, because it was failing to catch a 38 KHz pulse: I was switching the IR LED permanently on, which was not recognized as a 1 and generated erroneous transitions.

Emitting the 38KHz pulse

Ken Shirriff has another great article, titled Secrets of Arduino PWM, explaining the details used to generate a pulse automatically on *selected* Arduino digital outputs. This is rather complex and nicely encapsulated in his infrared library code, but I would suggest having a look if you're starting advanced development on the Arduino.

The simplest is then to use Ken's IRremote library, by first installing it into the installed arduino environment:

  • create a new directory /usr/share/arduino/libraries/IRremote (as root)
  • copy IRremote.cpp, IRremote.h and IRremoteInt.h there

and then use it in the midea_ir.ino program:


#include <IRremote.h>

IRsend irsend;

int IRpin = 3;

This includes the library in the resulting program and defines an IRsend object that we will use to drive the IR LED. One thing to note is that by default the IRremote library drives only digital pin 3; you can modify it to use a couple of other pins, but it is not possible to drive the PWM for digital pin 5, which is the one left unused on my greenhouse Arduino.

Then the idea is to simply replace the ir_down() and ir_up() calls in the code with the equivalent low-level entry points driving the LED in the IRsend object: first use irsend.enableIROut(38) to enable the pulse at 38 KHz on the default pin (digital 3), then use irsend.mark(usec) as the equivalent of ir_down() and irsend.space(usec) as the equivalent of ir_up():


void emit_midea_start(void) {
    irsend.enableIROut(38);
    irsend.mark(4200);
    irsend.space(4500);
}

void emit_midea_end(void) {
    irsend.mark(550);
    irsend.space(4500);
}

void emit_midea_byte(byte b) {
    int i;
    byte cur = b;
    byte mask = 0x80;

    for (i = 0; i < 8; i++) {
        irsend.mark(450);
        if (cur & mask)
            irsend.space(1700);
        else
            irsend.space(600);
        mask >>= 1;
    }

    ...

Checking with a normal LED allowed me to spot a brief light when emitting the frame, so it was basically looking okay...

And this worked: placing the emitting Arduino in front of the receiving one, the IRanalyzer started to decode the frames as with the real remote control; things were looking good again!

But it failed the real test... when put in front of the AC, the hardware didn't react; some improvement was still needed.

Check your timings, theory vs. practice

I suspected some timing issue, not with the 38KHz pulse, as the code from Ken was working fine for an array of devices, but rather with how my code was emitting. Another precious hint was found in the blog about the library:


IR sensors typically cause the mark to be measured as longer than expected and the space to be shorter than expected. The code extends marks by 100us to account for this (the value MARK_EXCESS). You may need to tweak the expected values or tolerances in this case.

Remember that the receptor does some logic on the input to detect the pulse at 38 KHz; that means that while a logic 0 can be detected relatively quickly, it will take at least a few beats before the sync to the pulse is recognized and the receiver switches its output to a logic 1. In a nutshell, a 1T low duration takes less time to recognize than a 1T high duration. I was also afraid that the overall time to send a full frame would drift beyond the fixed duration needed to transmit it.
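
On the emitting side, that kind of compensation boils down to something like the following sketch (my own illustration of the idea, not the library's actual code; send_bit() is a hypothetical helper and T_1/T_3 are the duration macros used earlier):

/* Shorten marks and lengthen spaces to compensate for the receiver's
 * demodulation latency; 100us is the MARK_EXCESS value quoted above. */
#define MARK_EXCESS 100

void send_bit(int one) {
    irsend.mark(T_1 - MARK_EXCESS);
    irsend.space((one ? T_3 : T_1) + MARK_EXCESS);
}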

So I tweaked the emitting code to count the actual overall duration of the frames, and also added to the receiver's decoding code the display of the first 10 durations between transitions. I then reran the receiver, looking at the same input from the real remote control and from the Arduino emulation, and found that on average:

  • the emulated 8T down was 200us too long
  • the emulated 8T up was 100us too short
  • the emulated 1T down at the beginning of a bit was 100us too long
  • the emulated 1T up at the end of logical 0 was 80us too short
  • the emulated 3T up at the end of logical 1 was 50us too short

After tweaking the durations accordingly in the emitter code, I got my first successful emulated command to the AC, properly switching it off. SUCCESS!!!

I then finished the code, providing the weird temperature conversion front-end routines, and glued that together as a test application looping over a minute:

  • switching to cooling to 23C for 15s
  • switching to heating to 26C for 15s
  • switching the AC off for 30s

The midea_ir_v1.ino code is available for download, analysis and reuse. I would suggest not letting this run for long in front of an AC, as the very frequent change of mode may not be good for the hardware (nor for the electricity bill!).

Generating the 38KHz pulse in software

While the PWM generation has a number of advantages, especially with respect to the regularity of the pattern and the absence of drift due, for example, to delays handling interrupts, in my case it has the serious drawback of forcing the use of a given pin (3 by default, or 9 if switching to a different timer in the IRremote code), and those are not available, unless getting the soldering iron out and changing some of the existing routing on my add-on board. So the next step is to also implement the 38KHz pulse in software. First, this should only affect the up phase: the down phase consists of no emission and is hence implemented by a simple:


void send_space(int us) {
    digitalWrite(IRpin, LOW);
    delayMicroseconds(us);
}

The up part should be divided into HIGH for most of the period, followed by a small LOW indicating the pulse. 38 KHz means a 26.316 microsecond period. Since the Arduino documentation indicates delayMicroseconds() is reliable only above 3 microseconds, it seems reasonable to use a 22us HIGH / 4us LOW split and let the remaining computation fill the sub-microsecond remainder of the period; that ought to be accurate enough. One of the points of the code below is to try to avoid excessive drift, in two ways:

  • by doing the accounting over the total length of the up period, not trying to just stack 21 periods
  • by running a busy loop when the delay left is minimal, rather than calling delayMicroseconds() for a too small amount (not sure it's effective: the micros() value seems to be updated periodically by a timer interrupt handler, and it doesn't look like the chip provides a fine-grained counter)

The resulting code doesn't look very nice:


void send_mark(int us) {
    unsigned long e, t = micros();

    e = t + us;                        /* end of the mark period */
    while (t < e) {
        digitalWrite(IRpin, HIGH);
        if (e - t < 4) {               /* too close to the end: busy loop */
            while ((t = micros()) < e);
            digitalWrite(IRpin, LOW);
            break;
        }
        if (e - t < 22) {              /* less than a full period left */
            delayMicroseconds(e - t);
            digitalWrite(IRpin, LOW);
            break;
        }
        delayMicroseconds(22);         /* HIGH part of the ~26us period */
        digitalWrite(IRpin, LOW);
        t = micros();
        if (e - t < 4) {
            while ((t = micros()) < e);
            break;
        }
        delayMicroseconds(4);          /* LOW part of the ~26us period */
        t = micros();
    }
}

But to my surprise, once I replaced all the irsend.mark() and irsend.space() calls by equivalent calls to send_mark() and send_space(), the IRanalyzer running on the second Arduino properly understood the sequence, proving that the IR receiver properly picked up the signal, yay!

Of course that didn't work the first time on the real hardware. After a bit of analysis of the resulting timings exposed by IRanalyzer, I noticed the marks at the beginning of bits were all nearly 100us too long; I switched the generation from 450us to 350us, and bingo, that worked with the real aircon!

The resulting midea_ir_v2.ino module is very specific code, but it is tiny, less than 200 lines, and the hardware side is also really minimal: a single resistor and the IR LED.

Epilogue

The code is now plugged in and working, but v2 just could not work in the real environment with all the other sensors and communication going on. I suspect the amount of foreign interrupts breaks the 38KHz pulse generation; switching back to the PWM-generated pulse using the IRremote library works in a very reliable way. So I had to unsolder pin 3 and reassign it to the IR LED, but that was a small price to pay compared to debugging the timing issues in situ!

The next step in the embedded work will be to replace the aging NSLU2 driving the Arduino with a shiny new Raspberry Pi!

This entry will be kept at http://veillard.com/embedded/midea.html.

(29 November 2012 à 11:14)

26 November 2012

Daniel Veillard

26 Nov 2012

Wandering in embedded land: part 1, Midea 美的 aircon protocol

I have been a user of Arduinos for a few years now; I use them to control my greenhouse (I grow orchids). This means collecting data for various parameters (temperature, hygrometry, light) and activating a collection of devices in reaction (a fan, a misting pump, a fogging machine, a heater). The control part is actually done by an NSLU2, which also collects the data, exports it as graphs on the internet, and allows me to manually jump in and take action if needed, even if I'm far away, using an ssh connection.

This setup has been working well for me for a few years, but since our move to China I have had an aircon installed in the greenhouse, like in the other parts of the home. And that's where I have a problem: this AC, of the brand Midea (a very common home appliance brand in China), can only be controlled through a remote control. Until now, that meant I had no way to automate heating or cooling, which is perfectly unreasonable :-)

After some googling, the most useful reference I found about those units is Tom's Site page on building a remote adapter for them. It explains most parts of the protocol, but not all of them: basically he stopped at the core of the interface and didn't go into the details, for example the command encoding. The 3 things I really need are:

  • Start cooling to a given temperature
  • Start heating to a given temperature
  • Stop the AC

I don't really need full fan speed control; low speed is quite sufficient for the greenhouse.

Restarting the Arduino development

I hadn't touched the Arduino development environment for the last few years, and I remember it being a bit painful to set up at the time. With Fedora 17, things have changed: a simple

yum install arduino

and launching the arduino tool worked the first time; it actually asked me for permission to tweak groups so that I, as the current user, could talk to the Arduino through the USB serial line. Once that was done, and after logging in again, everything worked perfectly. Congratulations to the packagers, well done! The only software annoyance is that it often takes a dozen seconds between the time an Arduino is connected or powered and the time it appears in the ttyUSB? serial port options in the UI, but that's probably not arduino's fault.

The arduino environment didn't really change in all those years; the two notable exceptions are the very long list of different boards supported now, and the fact that arduino code files were renamed from .pde to .ino!

Learning about the data emitted

The first thing needed was to double-check Tom's results with our own hardware, then learn about the protocol in order to construct the commands above. To do this I hooked an IR receptor to the Arduino on digital pin 3; the graphic below shows the logic, it's very simple:

Then I loaded a modified (for IRpin 3) version of Walter Anderson's IRanalyzer.pde onto the Arduino and started firing the aircon remote control at the receiver, then looked at the result: total garbage! Whatever the key pressed, the output had no structure and actually looked as random as the input without any key being pressed :-\

It took me a couple of hours of tweaking to find out that the metal enclosure of the receiver had to be grounded too: the ground pin wasn't connected, and not doing so led to random results!

Once that was fixed, the data read by the Arduino started to make some sense, and it was looking like the protocol was indeed the same as the one described on Tom's site.

The key to understanding how the remote works is that it encodes a digital input (3 bytes for the Midea AC protocol) as a set of 0 and 1 patterns, each of them being defined by a fixed 'off' duration followed by a short pulse at 38KHz to encode a 0, or a long pulse at 38KHz to encode a 1:

Each T delay corresponds to 21 pulses of a 38KHz signal; this is thus a variable-length encoding.

As I was making progress on recognizing the patterns sent by the aircon, I modified the program to give a more synthetic view of the received frames. You can use my own IRanalyzer.ino: it is extended to allow recording a variable number of transitions, detects the start transition as a 3-4 ms up and the end as a 3-4 ms down from the emitter, then shows the transmitted data as a bit field and as hexadecimal bytes:


Waiting...
Bit stream detected: 102 transitions
D U 1011 0010 0100 1101 1001 1111 0110 0000 1011 0000 0100 1111 dUD Bit stream end : B2 4D 9F 60 B0 4F !
4484 4324 608 1572 604 472 596 1580 600 1580 !
Waiting...

So basically what we find here:

  • the frame start markers: 4T down, 4T up
  • 6 bytes of payload; this is actually 3 bytes of data, but after each byte is sent, its complement is sent too
  • the end of the frame consists of 1T down, 4T up and then 4T down

there are a few interesting things to note about this encoding:

  • it is redundant, which allows detecting errors or stray data coming from other 38KHz remotes (which are really common!)
  • all frames are actually sent a second time just after the first one, so the amount of redundancy is around 4 to 1 in the end!
  • by re-emitting inverted values, the amount of 0s and 1s sent is the same; as a result a frame always has a constant duration, even though the encoding uses variable lengths
  • a double frame duration is around: 2 * (8 + 8 + 3*2*8 + 3*4*8 + 1 + 8) * 21 / 38000 ~= 186 ms

The protocol decoding

Once the frames were decoded properly, we are down to analyzing only 3 bytes of input per command. So I started pressing the buttons in various ways and recording the emitted sequences:


Cool 24 fan level 3
1011 0010 0100 1101 0011 1111 1100 0000 0100 0000 1011 1111 B2 4D 3F C0 40 BF
Cool 24 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 0100 0000 1011 1111 B2 4D 9F 60 40 BF
Cool 20 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 0010 0000 1101 1111 B2 4D 9F 60 20 DF
Cool 19 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 0011 0000 1100 1111 B2 4D 9F 60 30 CF
Heat 18 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 0001 1100 1110 0011 B2 4D 9F 60 1C E3
Heat 17 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 0000 1100 1111 0011 B2 4D 9F 60 0C F3
Heat 29 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 1010 1100 0101 0011 B2 4D 9F 60 AC 53
Heat 30 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 1011 1100 0100 0011 B2 4D 9F 60 BC 43
Stop Heat 30 fan level 1
1011 0010 0100 1101 0111 1011 1000 0100 1110 0000 0001 1111 B2 4D 7B 84 E0 1F
Cool 28 fan 1
1011 0010 0100 1101 1001 1111 0110 0000 1000 0000 0111 1111 B2 4D 9F 60 80 7F
Stop Cool 28 fan 1
1011 0010 0100 1101 0111 1011 1000 0100 1110 0000 0001 1111 B2 4D 7B 84 E0 1F

The immediately obvious information is that the first byte is the constant 0xB2, as noted on Tom's Site. Another thing one can guess is that the command from the control is (in general) absolute, not relative to the current state of the AC, so commands are idempotent: if the AC failed to catch one key press, it will still reach the correct state when the command is repeated. This just makes sense from a UI point of view! After a bit of analysis and further testing, the encoding of the 3 bytes seems to be:

[1011 0010] [ffff 1111] [ttttcccc]

Where tttt == temperature in Celsius, encoded as follows:

17: 0000, 18: 0001, 19: 0011, 20: 0010, 21: 0110,
22: 0111, 23: 0101, 24: 0100, 25: 1100, 26: 1101,
27: 1001, 28: 1000, 29: 1010, 30: 1011, off: 1110

I fail to see any logic in the encoding there; I dunno what the Midea guys were thinking when picking those values. What sucks is that the protocol seems to have a hardcoded range of 17-30, while for the orchids I basically try to keep the range 15-35, i.e. I will have to play with the sensor outputs to do the detection. Moreover, my testing shows that even when asked to keep warm at 17, the AC will continue to heat until well above 19C. I can't trust it to be accurate; best is to keep the control and logic on our side!

cccc == command: 0000 to cool, 1100 to heat, 1000 for automatic selection, and 1101 for the moisture removal mode

Lastly, ffff seems to be the fan control: 1001 for low speed, 0101 for medium speed, 0011 for high speed, 1011 for automatic, and 1110 for off. There is also a mode aimed at minimizing energy use, useful at night, where the fan runs even slower than the low speed, but I haven't yet understood how that actually works.

There are still 4 bits left undeciphered; they could be related to 2 functions that I don't use: a timer and the oscillation of the air flow. I didn't try to dig, especially with a remote control and documentation in Chinese!

Last but not least: the stop command is 0xB2 0x7B 0xE0, it's the same
whatever the current state might be.
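
To sum up the deciphering, here is a minimal sketch of how the 3 data bytes of a command could be built (my own reconstruction from the tables above, not code from the remote or from my modules; the constants and helper names are hypothetical). Each byte still has to be emitted followed by its complement, as described earlier:

/* Hypothetical helpers based on the tables deciphered above. */
#define MIDEA_MODE_COOL 0x0
#define MIDEA_MODE_HEAT 0xC
#define MIDEA_FAN_LOW   0x9

/* temperature encoding, valid from 17C to 30C */
byte temp_code(int celsius) {
    static const byte codes[14] = { 0x0, 0x1, 0x3, 0x2, 0x6, 0x7, 0x5,
                                    0x4, 0xC, 0xD, 0x9, 0x8, 0xA, 0xB };
    return codes[celsius - 17];
}

void build_midea_command(byte frame[3], int celsius, byte mode, byte fan) {
    frame[0] = 0xB2;                                    /* constant marker */
    frame[1] = (byte)(fan << 4) | 0x0F;                 /* [ffff 1111] */
    frame[2] = (byte)(temp_code(celsius) << 4) | mode;  /* [tttt cccc] */
}

As a sanity check, build_midea_command(frame, 24, MIDEA_MODE_COOL, MIDEA_FAN_LOW) yields B2 9F 40, which matches the "Cool 24 fan level 1" capture above. The stop command (0xB2 0x7B 0xE0) does not follow the [ffff 1111] pattern, so it is best emitted as a literal special case.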

At this point I was relatively confident I would be able to control the AC from an Arduino using a relatively simple IR LED control; it ought to be a "simple matter of programming", right?

Well, that will be the topic of the next part ;-) !

This entry will be kept at http://veillard.com/embedded/midea.html.

(26 November 2012 à 10:58)

19 February 2012

Stéphane Raimbault

Tip for python-mode with Emacs

If you expect 'Alt + d' to remove only the first part 'foo_' of 'foo_bar' with the great python-mode, you can make this change to python-mode.el:

- (modify-syntax-entry ?\_ "w" py-mode-syntax-table)
+ (modify-syntax-entry ?\_ "_" py-mode-syntax-table)

Thank you Ivan.

Update: with python-mode v6.0.4, add this line to python-mode-syntax-table (line 153):

(modify-syntax-entry ?\_ "_" table)

(19 February 2012 à 22:24)

27 January 2012

Gaël Chamoulaud

27 Jan 2012

FOSDEM 2012

My last FOSDEM participation was in 2004, and I still keep in mind many good moments with my French and Belgian GNOME friends!

So I'm totally excited to meet them again in 2012 ... :)

I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting

(27 January 2012 à 10:28)

15 November 2011

Stéphane Maniaci

Outreach in GNOME

The GNOME Montréal Summit was held a month ago now, and not only was it lots of fun, it was also a very productive time. Marina held a session about outreach in GNOME, and we spent time discussing different ways to improve how we welcome and attract people to GNOME. Let me share some of the points we raised, supplemented by my own personal opinions, which do not reflect those of my employer, when I'll have a job.

A warm welcome

There has been a lot of nice work done structuring and cleaning up the GNOME Love page. We now have a list of people whom newcomers can contact if they are interested in a particular project. Feel free to add your name and project to the list; the more entry points we get, the better for them!

I tend to think there is still a bit too much content on the GNOME Love page; maybe we could use more pretty diagrams (platform overview, ways to get involved) to keep the excitement growing and to reduce the amount of text we have right now (GUI tutorials, books about GNOME, tips & tricks). Feedback appreciated!

Start small

We tend to think of contributions as patches and a certain amount of code added to a project. However, it's not easy at all for newcomers to just pop in and work on a patch, especially in GNOME, where most software follows strict rules (coding style, GObject API style, etc.). And since GNOME maintains (again, for the most part) very high quality code, backed by many hackers, whether they're part of a company or independent contributors, it makes landing a patch even tougher.

Which is why we should encourage everyone who wants to get involved to work on small tasks, be it fixing a string typo, rewording a message, or marking plural forms for translation. Working on manageable changes ensures that the patches get completed, and landing these patches builds the confidence to work on bigger ones. Having your name in the commit log is a great reward that encourages sticking around and digging for more.

Advertise early, advertise often

If we want to get loads of people coming toward GNOME, we should definitely talk more and spread the word about the GNOME Outreach Program for Women (GOPW) and the Google Summer of Code (GSoC) earlier.

Google doesn’t announce the program very far in advance and approved organizations are only published three weeks before the application deadline, but we should encourage students to get involved in GNOME early and keep an eye out for such announcements. Having a list of mentors who can help newcomers anytime throughout the year and having that list included on the Google Summer of Code wiki page of organizations that provide year-round informal mentorship should help attract students to GNOME.

On our side, we could definitely gather ideas and promote the programs earlier. I don't have exact dates in mind, but our KDE fellows promote the Summer of Code in early March, if not before. Not only would that help spread the word better, but students might also get involved earlier and get to know the tools and the community before the actual program.

Communication is key to success

We have to get better at communicating with interns and making sure they get the help and feedback they need. We have different channels of communication in GNOME, mainly IRC and mailing lists. Both are a bit intimidating to newcomers (I still proceed with extreme care when I use them), so it would be good to have a short tutorial about the main mailing lists, how to connect to IRC, and what to expect from them.

Always two there are, no more, no less

In order to increase the chances of success for the interns, we need good mentors. Most people underestimate what it takes to be a good mentor: being nice, supportive, competent, enthusiastic. You have to remember you’re helping someone to land in the big GNOME land without too much hassle, so consider it carefully. I encourage you to read this very informative blog post if you’re thinking about mentoring a student.

The Summer of Code administrators at GNOME could perhaps keep an eye on mentors as well as students, not with weekly reports but just by poking them from time to time and making sure everything is going well.

Show me the way

To help students set up their workflow, it would be great to have full-length screen-casts demonstrating how to fix a bug in GNOME, starting on the Bugzilla page and finishing on the same page when attaching the final patch. This means going through cloning the module with Git, using grep to find the faulty line, editing the code, using Git to look at the diff and format the final patch. All this in one video would really help connect the parts and suggest a way to work for students.

GNOME Love bug drive

Please consider attaching the gnome-love keyword when you file or triage a bug that is easy to fix. A selection of current GNOME Love bugs is essential to help newcomers figure out how they can start contributing.

Good GNOME Love bugs are trivial or straightforward bugs that everyone agrees on, e.g. paper-cut bugs or corner cases. It's helpful to specify in the bug report the file or files that will need to be modified, and any reference code that does something similar. Even the most trivial bugs are suitable candidates, because in the end, fixing a GNOME Love bug is as much about learning the process as about the fix itself!

Get involved

If you want to help us gather more people around GNOME and help them find their spot in our community, make sure to subscribe to the outreach-list mailing list.

Thanks for reading!

And thanks to Marina and Karen for reviewing this post!

(15 November 2011 à 23:13)

09 October 2011

Stéphane Raimbault

Feedback on GNOME 3.0

After 5 months with GNOME 3.0, I'm really happy with the experience. At the end of a work day, my mind is no longer exhausted from fighting with window placement and hunting for applications.

GNOME 3.0 is really stable, except with the Open Source driver on my Radeon 5870 (4 crashes in 2 months).

I really like the behavior of dual-head, where the secondary screen has only one virtual screen. For me, there are just 3 annoying points:

  • Ctrl + Del to remove a file in Nautilus: maybe it's a Fedora setting, but this change is just @!#, I already have a Trash to undo my mistakes (http://www.khattam.info/howto-enable-delete-key-in-nautilus-3-fedora-15-2011-06-01.html)
  • the Alt key needed to shut down: no, I don't want to waste energy for days, and my PC boots quickly.
  • only vertical virtual screens: I find it a bit painful to move down two screens when the screen would be reachable with one move in a 2x2 layout, but I understand this layout doesn't fit well with the GNOME 3 design.

To have a good experience with GNOME 3, I use:

  • Windows key + type to launch everything
  • Ctrl + Shift + Alt + arrows to move the application between virtual screens
  • Ctrl + click in the launcher when I really want a new instance (the default behavior is perfect)
  • snap à la Windows 7, which is great
  • Alt + Tab, then the arrow keys, to select an app

Don't forget to read https://live.gnome.org/GnomeShell/CheatSheet or the Help (System key + 'Help').

It's not specific to GNOME 3, but you can change the volume by hovering your mouse over the applet (don't click, think hover) and scrolling. With GTK+, did you know you can reach the end of a scrolled area with a left click on the arrow, and a specific position with a middle click?

I'm impressed by the new features of GNOME 3.2 and I'm waiting for Fedora 17 to enjoy it!

(09 October 2011 à 21:16)

04 July 2011

Lucas Nussbaum

Going to RMLL (LSM) and Debconf!

Next week, I’ll head to Strasbourg for Rencontres Mondiales du Logiciel Libre 2011. On monday morning, I’ll be giving my Debian Packaging Tutorial for the second time. Let’s hope it goes well and I can recruit some future DDs!

Then, at the end of July, I’ll attend Debconf again. Unfortunately, I won’t be able to participate in Debcamp this year, but I look forward to a full week of talks and exciting discussions. There, I’ll be chairing two sessions about Ruby in Debian and Quality Assurance.

(04 July 2011 à 18:05)

17 February 2011

Vivien Malerba

Recent Libgda evolutions

It's been a long time since I blogged about Libgda (and for that matter, since I blogged at all!). Here is a quick outline of what has been going on with Libgda for the past few months:

  • Libgda's latest version is now 4.2.4
  • many bugs have been corrected and it's now very stable
  • the documentation is now fairly exhaustive and includes a lot of examples
  • a GTK3 branch is maintained; it contains all the modifications to make Libgda work in the GTK3 environment
  • the GdaBrowser and GdaSql tools have had a lot of work and are now both mature and stable
  • using the NSIS tool, I've made a new Windows installer available for GdaBrowser and the associated tools, at http://www.gnome.org/~vivien/GdaBrowserSetup.exe. It's only available in English and French; please test it and report any errors.

In the coming months, I'll work on polishing the GdaBrowser tool even more, as I use it on a daily basis (and of course on correcting bugs).

(17 February 2011 à 20:11)

21 March 2010

Ali Sabil

Follow me on Twitter

16 March 2010

Julien Puydt

Webkit fun, maths and an ebook reader

I have been toying with webkit lately, and even managed to do some pretty things with it. As a consequence, I haven't worked that much on ekiga, but perhaps some of my experiments will turn into something interesting there. I have an experimental branch with a less-than-fifty-lines patch... I'm still trying to find a way to do more with less code: I want to do as little GObject-inheritance as possible!

That little programming was done while studying class field theory, which is pretty nice on the high-level principles and somewhat awful on the more technical aspects. I also read again some old articles on modular forms, but I can’t say that was “studying” : since it was one of the main objects of my Ph.D, that came back pretty smoothly…

I found a few minutes to enter a brick-and-mortar shop and have a look at the ebook readers on display. There was only *one* of them: the Sony PRS-600. I was pretty unimpressed: the display was too dark (because it was a touch screen?), but that wasn't the worst deal breaker. I inserted an SD card where I had put a sample of the type of documents I read: they showed up as a flat list (pain #1), not all of them appeared (no djvu) (pain #2), and finally, one of them showed up too small... and ended up fully unreadable when I tried to zoom (pain #3). I guess that settles the question I had on whether my next techno-tool would be a netbook or an ebook reader... That probably means I'll look more seriously into fixing the last bug I reported on evince (internal bookmarks in documents).

(16 March 2010 à 20:20)

24 February 2010

Laurent Richard

A fresh start in my professional life

Hello everyone,

I have been neglecting you for a while now. Is it the weather, a period in my life, or simply something else? I have no idea.

I just wanted to announce that I am going to leave my current employer, a government agency, to look for experience in the private sector. Indeed, I am more and more disappointed by the Administration.

For a few years now, as you know, I have been passionate about information security. Adding to this a training in Information Security Management, my ambition is to bring my experience to an employer (to be determined) who could allow me to improve it while benefiting from my skills.

If you have any good leads, I am obviously interested. ^^

(24 February 2010 à 23:56)

16 January 2010

Vivien Malerba

New Libgda releases

With the beginning of the year come new releases of Libgda:

  • version 4.0.6 which contains corrections for the stable branch
  • version 4.1.4, a beta version for the upcoming 4.2 version

The 4.1.4 API is now considered stable and, except for minor corrections, should not be modified anymore.

This new version also includes a new database adapter (provider) to connect to databases through a web server (which of course needs to be configured for that purpose), as illustrated by the following diagram:

WebProvider usage

The database being accessed by the web server can be any type supported by the PEAR::MDB2 module.

The GdaBrowser application now supports defining presentation preferences for each table column, which are used whenever data from that column needs to be displayed:
GdaBrowser table column's preferences
The UI extension now supports improved custom layouts, described through a simple XML syntax, as shown in the following screenshot of the gdaui-demo-4.0 program:

Form custom layout

For more information, please visit the http://www.gnome-db.org web site.

(16 January 2010 à 18:01)

08 January 2010

Johann Prieur

Attending XMPP Summit and FOSDEM, 5th-8th of February in Brussels

I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting

For the third year in a row, I'll be flying to Brussels, Belgium next month to attend the XMPP Summit/FOSDEM combo. I didn't look through the FOSDEM schedule yet, but when it comes to XMPP, I'm looking forward to some discussions on Jingle Nodes and Publish-Subscribe. I've been working more and more with XMPP in the past months, especially hacking on ejabberd, and attending is a good motivation to get some of my Jingle Nodes related code shaped up on time. See you there!


(08 January 2010 à 11:55)

30 December 2009

Laurent Richard

Reminder - The definition of a hacker

A hacker is a computing enthusiast, often very gifted, whose only goals are to tinker with programs and hardware (software and hardware) in order to obtain quality results, for himself, for the evolution of technology, and for the recognition of his peers.

Hacker conventions are gatherings where these computing enthusiasts meet, discuss, and compare their work.

For many years now, the tendency has been to wrongly conflate the hacker with the cracker, whose goals are not always legal.

Yet, and it cannot be repeated often enough, the hacker's goals are laudable and contribute actively to the progress of computing and of the tools we use every day.

(30 December 2009 à 20:25)

05 November 2009

Julien Puydt

Attracted to FLT

I have been a little stuck for some weeks: a new year started (no, this post hasn't been stuck since January; school years start in September) and I have students to tend to. As I have the habit of saying: good students bring work because you have to push them high, and bad students bring work because you have to push them up from low! Either way, it has been keeping me pretty busy.

Still, I found the time to read some more maths, but got lost on something quite unrelated to my main objective: I just read about number theory and the ideas behind the proof of Fermat's Last Theorem (Taylor and Wiles' theorem now). That was supposed to be my second target! Oh well, I'll just try to hit my first target now (Deligne's proof of the Weil conjectures), and then go back to FLT for a new and deeper reading.

I only played a little with ekiga's code, mostly removing dead code. Not much: low motivation.

(05 November 2009 à 12:44)

15 October 2009

Johann Prieur

gwt-strophe 0.1.0 released

I just released the first version of gwt-strophe, GWT bindings for the Strophe XMPP library. There is not much to say, other than that it is pretty young, with all that can imply. The project is hosted at https://launchpad.net/gwt-strophe


(15 October 2009 à 22:06)

11 July 2009

Lucas Nussbaum

Slides from RMLL (and much more)

So, I'm back from the Rencontres Mondiales du Logiciel Libre, which took place in Nantes this year. It was great to see all those people from the French Free Software community again, and I look forward to seeing them again next year in Bordeaux (too bad the Toulouse bid wasn't chosen).

The Debian booth, mainly organized by Xavier Oswald and Aurélien Couderc, with help from Raphaël, Roland and others (but not me!), got a lot of visits, and Debian’s popularity is high in the community (probably because RMLL is mostly for über-geeks, and Debian’s market share is still very high in this sub-community).

I spent quite a lot of time with the Ubuntu-FR crew, which I hadn’t met before. They do an awesome work on getting new people to use Linux (providing great docs and support), and do very well (much better than in the past) at giving a good global picture of the Free Software world (Linux != Ubuntu, other projects do exist and play a very large role in Ubuntu’s success, etc). It’s great to see Free Software’s promotion in France being in such good hands. (Full disclosure: I got a free mug (recycled plastic) with my Ubuntu-FR T-shirt, which might affect my judgement).

I gave two talks, on two topics I had wanted to talk about for some time. The first one was about the interactions between users, distributions and upstream projects, with a focus on Ubuntu's development model and its relationships with Debian and upstream projects. The second one was about voting methods, and Condorcet in particular. If you attended one of those talks, feedback (good or bad) is welcome (either in comments or by mail). Slides are also available (in French):

On a more general note, I still don't understand why the "Mondiales" in RMLL's title isn't being dropped or replaced by "Francophones". Seeing the organization congratulate themselves because 30% of the talks were in English was quite funny, since in most cases the English part of the talk was "Is there someone not understanding French? No? OK, let's go on in French.", and all the announcements were made in French only. Seriously, RMLL is a great (probably the best) French-speaking community event. But it's not FOSDEM: different goals, different people. Instead of trying (and failing) to make it an international event, it would be much better to focus on making it a better French-speaking event, for example by getting more French-speaking developers to come and talk (you see at least 5 times more French-speaking developers at FOSDEM than at RMLL).

I'm now back in Lyon for two days, before leaving for the Montreal Linux Symposium, then coming back to Lyon for three days, then Debconf from the 23rd to the 31st, and then moving to Nancy, where I will start as an assistant professor in September (a permanent (tenured) position).

(11 July 2009 à 09:11)

26 February 2009

Ali Sabil

fatal: protocol error: expected sha/ref

Dear Lennart,

You should probably know that typing the correct URL would work better for cloning a bzr branch (yes a branch, not a repository).

This is what I get when I try to feed git a random invalid URL:

$ git clone git://github.com/idontexist
Initialized empty Git repository in /home/asabil/Desktop/idontexist/.git/
fatal: protocol error: expected sha/ref, got ‘
*********’

No matching repositories found.

*********’

Now is probably the time to stop this non-constructive "my DVCS is better than yours", and focus on writing code and fixing bugs.


(26 February 2009 à 10:40)

19 November 2008

Gaël Chamoulaud

19 Nov 2008

WOW... Four fucking years without blogging on my Advogato page. I needed time to put my head and my body in the right place. Four years of doubt, sadness and happiness as well. So, a few days ago, I decided to blog again.

That's all for the moment :)

(19 November 2008 à 14:13)

22 July 2008

Xavier Claessens

Looking for a job

In September I finish my studies in computer science, so I am starting to look for a job. I have really enjoyed my current job at Collabora maintaining Empathy; I learned lots of things about the Free Software world, and I would like to keep working on free software related projects if possible. My CV is available online here.

Do you guys know of any company around free software and GNOME looking for new employees? You can contact me by email at xclaesse@gmail.com

(22 July 2008 à 08:29)

22 April 2008

Raphaël Slinckx

Enterprise Social Search slideshow

Enterprise Social Search is a way to search, manage, and share information within a company. Who can help you find relevant information, and nothing but relevant information? Your colleagues, of course.

Today we are launching, at Whatever (the company I work for), a marketing campaign for our upcoming product: Knowledge Plaza. Exciting times ahead!

(22 April 2008 à 12:21)

28 January 2008

Sébastien Bacher

Ubuntu stable updates

There were some blog entries this week about GNOME stable updates on Ubuntu. There is no reason new bug-fix versions could not be uploaded to stable, except that the SRU rules require checking all the changes carefully, and doing this job on all the GNOME tarballs is quite some work, while the Ubuntu desktop team is quite small and already overworked.

There is a list of packages which have relaxed rules, though. We have discussed adding GNOME to that list, since the stable series usually has fixes worth having and not too many unstable changes (though the stable SVN code usually doesn't get a lot of testing), and decided that the stable updates which look reasonable should be uploaded to hardy-updates.

There were also some concerns about gnome-games: 2.20.3 has been uploaded to gutsy-proposed today, which should reduce the number of bugs sent to the GNOME Bugzilla. The new dependencies on ggz have also been reviewed, and 2.21 should be built soon in hardy.

(28 January 2008 à 23:12)

14 November 2007

Sébastien Bacher

GNOME and Ubuntu

The FOSSCamp and UDS week was nice and a good occasion to talk to upstream and to people from other distributions. We had desktop discussions about the new technologies landing in GNOME this cycle (the next Ubuntu will be an LTS, so we need a balance between new features and stability), the desktop changes we want to make, and how Ubuntu contributes to GNOME.

Some random notes about the Ubuntu upstream contributions:

  • Vincent asked again for an easy way to browse the Ubuntu patches, and Scott picked up the task; the result is available there
  • The new Canonical Desktop Team will focus on making the user experience better; most of the changes will likely be upstream material and discussed there, etc.
  • Canonical has open Ubuntu Desktop Infrastructure Developer and Ubuntu Conceptual Interface Designer positions; if you want to do desktop work for a cool open source company, you might be interested in those ;-)

GNOME updates in gutsy and hardy

  • Selected GNOME 2.20.1 changes have been uploaded to gutsy-updates
  • The GNOME 2.21.2 packaging has started in hardy; some updates and a lot of Debian merges are still on the TODO, though
  • We have decided to use tags in patches to indicate the corresponding Ubuntu and upstream bugs, so it's easier to get the context of a change; technical details still need to be discussed, though

Update: Scott pointed out that you can use http://patches.ubuntu.com/n/nautilus/extracted to access the current nautilus version

(14 November 2007 à 13:09)

03 November 2007

Raphaël Slinckx

git commit / darcs record

I've been working with git lately, but I have missed the darcs user interface. I honestly think the darcs user interface is the best I've ever seen; it's such a joy to record/push/pull (when darcs doesn't eat your CPU) :)

I looked at git add --interactive because it had hunk-based commits, a prerequisite for darcs record-style commits, but it has a terrible user interface, so I just copied the concept: running a git diff, filtering hunks, and then feeding the filtered diff to git apply --cached.

It supports binary diffs, and file additions and removals. It also asks for new files to be added, even if this is not exactly how darcs behaves, but I always forget to add new files, so I added that. It will probably break on some extreme corner cases I haven't been confronted with, but I gladly accept any patches :)

Here’s a sample session of git-darcs-record script:

$ git-darcs-record
Add file:  newfile.txt
Shall I add this file? (1/1) [Ynda] : y

Binary file changed: document.pdf

Shall I record this change? (1/7) [Ynda] : y

foobar.txt
@@ -1,3 +1,5 @@
 line1
 line2
+line3
 line4
+line5

Shall I record this change? (2/7) [Ynda] : y

git-darcs-record
@@ -1,17 +1,5 @@
 #!/usr/bin/env python

-# git-darcs-record, emulate "darcs record" interface on top of a git repository
-#
-# Usage:
-# git-darcs-record first asks for any new file (previously
-#    untracked) to be added to the index.
-# git-darcs-record then asks for each hunk to be recorded in
-#    the next commit. File deletion and binary blobs are supported
-# git-darcs-record finally asks for a small commit message and
-#    executes the 'git commit' command with the newly created
-#    changeset in the index
-
-
 # Copyright (C) 2007 Raphaël Slinckx
 #
 # This program is free software; you can redistribute it and/or

Shall I record this change? (3/7) [Ynda] : y

git-darcs-record
@@ -28,6 +16,19 @@
 # along with this program; if not, write to the Free Software
 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.

+# git-darcs-record, emulate "darcs record" interface on top of a git repository
+#
+# Usage:
+# git-darcs-record first asks for any new file (previously
+#    untracked) to be added to the index.
+# git-darcs-record then asks for each hunk to be recorded in
+#    the next commit. File deletion and binary blobs are supported
+# git-darcs-record finally asks for a small commit message and
+#    executes the 'git commit' command with the newly created
+#    changeset in the index
+
+
+
 import re, pprint, sys, os

 BINARY = re.compile("GIT binary patch")

Shall I record this change? (4/7) [Ynda] : n

git-darcs-record
@@ -151,16 +152,6 @@ def read_answer(question, allowed_responses=["Y", "n", "d", "a"]):
        return resp

-def setup_git_dir():
-       global GIT_DIR
-       GIT_DIR = os.getcwd()
-       while not os.path.exists(os.path.join(GIT_DIR, ".git")):
-               GIT_DIR = os.path.dirname(GIT_DIR)
-               if GIT_DIR == "/":
-                       return False
-       os.chdir(GIT_DIR)
-       return True
-
 def git_get_untracked_files():

Shall I record this change? (5/7) [Ynda] : y

# On branch master
# Changes to be committed:
#   (use "git reset HEAD file..." to unstage)
#
#       modified:   document.pdf
#       modified:   foobar.txt
#       modified:   git-darcs-record
#       new file:   newfile.txt
#
# Changed but not updated:
#   (use "git add file file..." to update what will be committed)
#
#       modified:   git-darcs-record
#
What is the patch name? Some cute patch name
Created commit a08f34e: Some cute patch name
 4 files changed, 3 insertions(+), 29 deletions(-)
 create mode 100644 newfile.txt

Get the script here: git-darcs-record script, and put it somewhere in your $PATH. Any comments or improvements are welcome!

(03 November 2007 à 16:43)

22 January 2007

Xavier Claessens

A new laptop, without Windows!

So, I had been thinking about it for a long time, and now it's done: I bought myself a brand new laptop.

I bought it from the French site LDLC.com and asked whether it was possible to buy the computers in their catalogue without software (mainly without Windows). I sent them an email and, to my great surprise, they answered that it was entirely possible: you just place the order and then send an email asking for the software to be removed from it. So I ordered my laptop and they refunded me 20€ for the software; it's not much compared to the price of a laptop, but symbolically it's already something.

Still, I have questions: why isn't this offer mentioned on the LDLC site? Looking under my brand new laptop, I notice something strange: the remains of a sticker that has been removed, exactly where the WinXP activation key is usually stuck. The flat 20€ refund from LDLC also seems strange to me, given that LDLC is only a reseller, not a manufacturer, so they buy the computers with Windows already installed. All this leads me to believe that it is LDLC that loses the 20€, and I wonder why. To please free software enthusiasts? To avoid lawsuits over bundled sales? To get the licences that customers didn't want refunded by the manufacturer/Microsoft, and possibly earn more than 20€ if the OEM licences are worth more than that? This will probably always remain a mystery.

I then installed Ubuntu, which runs rather well. I was even very impressed by network-manager, which connects me automatically to wifi or wired networks depending on availability, and even configures a zeroconf network if it doesn't find a DHCP server. That's very handy for transferring data between 2 computers: just plug an ethernet cable between the 2 (it also works over wifi, but I haven't tested that yet) and the whole network is configured automatically, without touching anything, really magical! Windows can go hide; Ubuntu is far easier to use!

(22 January 2007 à 04:12)

20 December 2006

Joachim Noreiko

Documenting bugs

I hate having to write about bugs in the documentation. It feels like waving a big flag that says ‘Ok, we suck a bit’.

Today, it's the way fonts are installed, or rather, aren't. The Fonts folder doesn't show the new font, and applications that are already running don't see it.

So I’ve fixed the bug that was filed against the documentation. Now it’s up to someone else to fix the bugs in Gnome.

(20 December 2006 à 05:41)

05 December 2006

Joachim Noreiko

Choice and flexibility: bad for docs

Eye of Gnome comes with some nifty features, like support for EXIF data in JPEGs. But this depends on a library that isn't part of Gnome.

So what do I write in the user manual for EOG?

‘You can see EXIF data for an image, but you need to check the innards of your system first.’
‘You can maybe see EXIF data. I don’t know. Ask your distro.’
‘If you can’t see EXIF data, install the libexif library. I’m sorry, I can’t tell you how you can do that as I don’t know what sort of system you’re running Gnome on.’

The way GNU/Linux systems are put together is perhaps great for people who want unlimited ability to customize and choose. But it makes it very hard to write good documentation. In this sort of scenario, I would say it makes it impossible, and we’re left with a user manual that looks bad.

I’ve added this to the list of use cases for Project Mallard, but I don’t think it’ll be an easy one to solve.

(05 December 2006 à 08:08)

Sources

Planète GNOME-FR

Planète GNOME-FR is a glimpse into the life, the work and, more generally, the world of the members of the GNOME-FR community.

Some posts are written in English because we collaborate with people from all over the world.

Last update:
28 April 2017 at 17:30 UTC
All times are UTC.

Colophon

Planète GNOME-FR is powered by the Planet aggregator, cron, Python, and Red Hat (which hosts this server).

The site design is based on that of the GNOME and Planet GNOME sites.

Planète GNOME-FR is maintained by Frédéric Péters and Luis Menina. If you would like to add your blog to this planet, just open a bug. Feel free to contact us by email for any other questions.