<![CDATA[Olivier Dolbeau]]> 2017-03-20T00:05:16+01:00 https://odolbeau.fr/ Sculpin <![CDATA[How to install your laptop with ansible?]]> 2016-08-30T00:00:00+02:00 https://odolbeau.fr/blog/how-to-install-your-laptop-with-ansible.html Why Ansible?

As you may know, I'm pretty familiar with chef and I use it almost every day, for both professional & personal stuff. Despite that, I am quite willing to try something else and Ansible is a well known (and used!) configuration management tool. I know a lot of people who are quite pleased to use it!

Furthermore, I changed my laptop last week so it was the perfect occasion to give it a try :).

Let the journey begin!

As you will see, Ansible is really easy to use.

What's the goal?

As I said, my goal is to automatically set up a development laptop. I use Debian and I will only focus on it. :)

I would like:

  • 2 steps maximum (bootstrap + run)
  • as few manual actions as possible
  • an easy to understand / maintain project

Implementation

First of all, let's start with the project tree:

.
├── bin
│   └── bootstrap
├── laptop.yml
├── Makefile
├── README.md
└── roles
    └── common
        ├── files
        │   └── ssh
        │       ├── config
        │       ├── id_rsa
        │       └── id_rsa.pub
        └── tasks
            ├── main.yml
            └── nginx.yml

There aren't a lot of files, which is a good point for maintainability, right? :)

Furthermore, the installation process is very simple:

  1. Clone the repository
  2. Bootstrap the laptop with the bin/bootstrap command
  3. Install the laptop with make install

As you can see, I only have 2 commands to run: it seems one of my goals is already reached! \o/

Let's explain those 2 steps.

Bootstrap

As ansible isn't installed by default on your laptop, the goal of the bootstrap is to install it. Furthermore, I don't want to deal with the ansible command line because there are several arguments to pass and I'm used to installing everything with make install; that's why I need make too. Finally, as ansible will be run by my user and not by root, I need some privileges, which is why I also install sudo and grant all privileges to the current user.

Here is the bootstrap script:

#!/bin/bash

echo "Installing sudo, make & ansible, and allow user \"${USER}\" to run any command with sudo..."

LOCAL_USER=${USER} su -c 'apt-get install sudo make ansible && echo "${LOCAL_USER}      ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers.d/${LOCAL_USER}'

Once all prerequisites are installed, we can use ansible.

Installing the laptop

As I said, I use a Makefile. It contains only one command:

.PHONY: ${TARGETS}

install:
        ansible-playbook -i '127.0.0.1,' laptop.yml --ask-vault-pass

We simply ask ansible to run the playbook named laptop.yml on 127.0.0.1. Forget the --ask-vault-pass option for now, we'll discuss it later! ;)

Playbook

As said before, we ask ansible to run a playbook. In our case, it's called laptop.yml and here is the content of this file:

---

- hosts: 127.0.0.1
  connection: local
  roles:
    - common

The only impacted host is 127.0.0.1. We use the local connection (you can use ssh to configure a remote server for instance). Then we list all roles which concern our host.

It's a very simple playbook and I won't go into details on this subject for two reasons:

  • you don't need anything else to configure a personal laptop
  • it's a huge subject and I'm definitely not the best specialist to talk about it :)

If you're interested anyway, you can take a look at the official documentation.

Roles & Tasks

Our playbook mentions only 1 role: common.

Let's have a look at it:

roles/common/
├── files
│   └── ssh
│       ├── config
│       ├── id_rsa
│       └── id_rsa.pub
└── tasks
    ├── main.yml
    └── nginx.yml

It contains several files related to ssh and two task files called main.yml and nginx.yml.

You will always have a main.yml task in a role as it's the default entry point. Here is an extract of this file:


---

- name: install packages
  become: true
  apt: name="{{item}}" state=present
  with_items:
    - ack-grep
    - composer
    - curl
    - make
    # ...

- name: Install slack
  apt:
    deb: https://downloads.slack-edge.com/linux_releases/slack-desktop-2.1.0-amd64.deb
    state: present
  become: true

- name: Install ssh keys
  copy:
    src: "ssh/{{ item }}"
    mode: "0644"
    dest: /home/odolbeau/.ssh/
  with_items:
    - id_rsa
    - id_rsa.pub

- name: Install ssh config
  copy:
    src: "ssh/config"
    mode: "0644"
    dest: /home/odolbeau/.ssh/

- name: Download dot files from github
  git: repo=ssh://git@github.com/odolbeau/dot-files.git dest=/home/odolbeau/dot-files

- name: Install dot files
  command: make -C /home/odolbeau/dot-files install

- name: Download VIM configuration from github
  git: repo=ssh://git@github.com/odolbeau/vim-config.git dest=/home/odolbeau/vim-config

- name: Install VIM configuration
  command: make -C /home/odolbeau/vim-config install

- include: nginx.yml

There are several instructions in this file. As you may have noticed, everything is in yaml and clearly understandable.

Let's explain some of these instructions:


- name: install packages
  become: true
  apt: name="{{ item }}" state=present
  with_items:
    - ack-grep
    - composer
    - curl
    - make
    # ...

Most of the ansible instructions speak for themselves!

In this case, we create a task which will use the apt module to install packages. This task will be run once for each item listed under the with_items key.

The become: true option is used to run this task as root (because the default value for become_user is root).


- name: Install slack
  apt:
    deb: https://downloads.slack-edge.com/linux_releases/slack-desktop-2.1.0-amd64.deb
    state: present
  become: true

In this case, we still use the apt module, this time to install a remote package. Notice that you can use an inline syntax like in the first example with apt: deb="..." or the extended syntax like here.
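For the record, the same task written with the inline syntax would look like this:

- name: Install slack
  become: true
  apt: deb=https://downloads.slack-edge.com/linux_releases/slack-desktop-2.1.0-amd64.deb state=present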


- name: Install ssh config
  copy:
    src: "ssh/config"
    mode: "0644"
    dest: /home/odolbeau/.ssh/

Again, a very easy-to-understand task! I simply want to copy files from my role (placed under my_role/files/) onto my laptop. Easy! \o/


- name: Download dot files from github
  git: repo=ssh://git@github.com/odolbeau/dot-files.git dest=/home/odolbeau/dot-files

- name: Install dot files
  command: make -C /home/odolbeau/dot-files install

Those 2 tasks are used to install my dot-files. The first one uses git to download the repository and the second executes a make install inside the correct folder.

I won't list all modules I use though. There are plenty of them and their documentation is very clear! Don't forget to have a look at existing modules before running a command by yourself. :)
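By the way, the nginx.yml file included at the end of main.yml isn't shown in this post; a minimal version could look something like this (just a sketch, assuming the nginx package from the Debian repositories is enough):

---

- name: Install nginx
  become: true
  apt: name=nginx state=present

- name: Make sure nginx is running
  become: true
  service: name=nginx state=started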

That's it!

You know everything you need to start using ansible by yourself on a single host!

Bonus

Ask the user to do something for you

Let's confess: sometimes, it's hard / painful / time-consuming / impossible to do everything with a configuration management tool.

For instance, in my case, I need to install a VPN client and to create a tunnel in order to download some private projects.

Once the VPN is installed, here is what I use:


- command: ping -c 1 "a.private.url"
  register: vpn_connected
  ignore_errors: True

- pause:
    prompt: "Make sure to run the VPN in order to continue the installation. [Press any key once done]"
  when: vpn_connected|failed

I try to ping a private URL. I register the result of this command inside the vpn_connected var.

Then I use the pause module. If the tunnel isn't running, I simply ask the user to launch it, otherwise, it keeps going!

Of course, the goal is not to use this trick every time: if your users have to do everything manually, you're not using a configuration management tool correctly! Use this only when you really can't configure something automatically.

Deal with sensitive data

As previously explained, I use ansible to install my private ssh keys. Even if all my private keys are protected by a passphrase, I don't want to version them without encryption!

In my case, I use a private repository to store my ansible configuration. In this situation, it's not really necessary to encrypt your keys but as you will see, it's very easy to do! :)

Of course, encrypting keys / passwords / files is a common use case and Ansible offers a very powerful solution to deal with it: Vault.

It's shipped with the ansible package and it's easy to use! I mean, really easy!

If you want to encrypt your ssh keys:

# Copy files into your role
cp ~/.ssh/id_rsa ~/.ssh/id_rsa.pub my_role/files/
# Encrypt them!
ansible-vault encrypt my_role/files/id_rsa my_role/files/id_rsa.pub
New Vault password:
Confirm New Vault password:
Encryption successful

And that's it! Your files are now encrypted —congrats by the way! :)

As soon as you have encrypted files in a role, you'll have to add the --ask-vault-pass option when running the ansible-playbook command.
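And if you later need to read or update an encrypted file, ansible-vault also provides view and edit subcommands:

# Display the decrypted content of a file
ansible-vault view my_role/files/id_rsa
# Edit it in place (the file is re-encrypted when you save)
ansible-vault edit my_role/files/id_rsa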

Conclusion

I hope I convinced you: it's pretty convenient to write an ansible playbook to install your laptop! The next time you change laptops, you will save a lot of time by running 2 commands instead of installing everything manually! :)

Ansible is easy to use, which makes it a good choice to fulfill this need, but there are a lot of other configuration management tools out there, so don't hesitate to take a look at them! :)

]]>
<![CDATA[Which tool to use to make marketing dashboards?]]> 2014-11-09T00:00:00+01:00 https://odolbeau.fr/blog/which-tool-to-use-to-make-marketing-dashboards.html The history

For those who don't know it, I gave a presentation on the ELK stack at the PHP Forum in Paris (in French). During this talk, I mainly talked about our (BlaBlaCar) technical logs. I also mentioned that we use ELK to make some marketing dashboards too (signups by country, payments and so on), even if I said that I don't consider ELK the best tool for this.

Last week Claude Duvergier (@C_Duv) asked me on twitter which tool I would recommend instead of ELK. This made me think that, in fact, ELK is not a bad choice, even if some other tools exist.

Disclaimer: I'm definitely not aware of all existing tools for making marketing dashboards. I will only talk about solutions I have already used. Sorry if I missed something important. This article is more my personal feedback on some tools we use than a real comparison.

Tools

ELK

Let me start with the one I probably know best. ELK is very easy to set up and to use. You just have to send all important events to logstash or anything else (in our case, we send them to a RabbitMQ broker) and store them in ElasticSearch. Kibana will then help you display all these events.

By design, all elasticsearch queries are made on the fly. ElasticSearch is "just" a data store and doesn't aggregate anything. You're able to update your queries whenever you want, filter results, etc. directly in Kibana, which is really appreciable. The drawback is the time needed to render some graphs over a long period.

Another point to consider with ELK is the storage needed for all your data. It can be quite huge depending on what you choose to store and on your retention policy.

Pros:

  • Very easy to set up and install
  • Kibana is very easy to use (I didn't try the latest version with aggregations support but I'm sure it's even better)

Cons:

  • Rendering time depends on the interval size
  • Storage space needed (sometimes)

NewRelic

For those who don't know it, NewRelic is an Application Performance Management (APM) tool. I won't talk a lot about it because even if you can make some graphs based on page traffic, it's definitely not the solution to our problem.

I chose to put it in this list only because we use some New Relic dashboards for technical needs (to check registrations & payments after a deployment for example), and they can look like marketing dashboards.

Pros:

  • Easy to make some basic dashboards (once you've installed it)

Cons:

  • Not free
  • Very basic
  • Can only track page traffic
  • Not possible to change the time aggregation interval

InfluxDB

Recently introduced at BlaBlaCar, I have fallen in love with this time series database. For now, we use it for code instrumentation purposes and we only have technical dashboards (and a bunch of issues with the collected data. :P). The latest stable release of InfluxDB is 0.8.5 and cluster support is experimental. Despite all of this, it works well. :) It's not the goal here, so I will not talk about the design of InfluxDB. You are strongly encouraged to take a look at it.

Like with ELK, you need to send events with all the relevant information. This time however, you shouldn't query the created series directly. You have to create some continuous queries in order to split your data.

For example, in our case, we send an event for each HTTP request. This event contains the called route, and we have a continuous query which creates new series per route and calculates the average response time and memory usage with a 1-minute aggregation. Another continuous query does the same every hour.

To create dashboards with data from InfluxDB, we use grafana, a kibana clone which supports InfluxDB, Graphite and OpenTSDB. With this system, we always query the same series. It's really fast, even if you display a lot of series over a long period.

Pros:

  • Really fast to display
  • Can store all kinds of events (even system events for example).
  • Grafana is as simple as Kibana (maybe more)

Cons:

  • Harder to set up than an ELK stack
  • Need to define all continuous queries in advance to be able to use them afterwards
  • Cluster support not ready yet

Conclusion

I know that there are a lot of other solutions available to create marketing dashboards. I hesitated to talk about Hadoop / Vertica / Tableau, which are also used at BlaBlaCar for data analysis. However, as I don't personally use these tools, they don't fit my initial requirements.

Despite what I said during my talk at ForumPHP Paris, ELK IS a very good choice, especially if you already use this stack to analyze your logs and if you don't need to display a long period on your marketing dashboards. If you have more time and want to use a time series database, I recommend InfluxDB even if it's still really young software. :)

If you use another tool and think it's THE perfect solution, let me know. :)

]]>
<![CDATA[[Benchmark] PHP amqp-lib VS amqp-ext with Swarrot]]> 2014-10-30T00:00:00+01:00 https://odolbeau.fr/blog/benchmark-php-amqp-lib-amqp-extension-swarrot.html The history

Everything started on 7 April 2014. Grégoire Pineau (aka @lyrixx) and I gave a talk called "Making asynchronous tasks in PHP" (slides are available on speakerdeck).

During this talk we trolled a bit about the php-amqplib created by Alvaro Videla and maintained by a lot of contributors.

This library is one of the 2 main ways to talk to an amqp broker. The other one is the php extension.

Anyway, the subject came back a few days ago on twitter (the conversation is in French) when Ölbaum asked us whether we still recommend not using php-amqplib. A few tweets later, I proposed making a small benchmark to compare these 2 ways of talking to a broker. Because trolling is fun but sometimes, having some real arguments is better.

Environment

  • The broker used is RabbitMQ (v3.3.5).
  • I launched tests on a Mac Book Pro (2,8 GHz Intel Core i7, 16 GB 1600 MHz DDR3 with OSX 10.10).
  • I used PHP 5.6.2.
  • I used the latest stable version of both the extension (1.4.0) and the library (2.4.1).

I chose to use Swarrot to write as little code as possible for each implementation.

The full project used for this benchmark can be found on github. Of course, feel free to contribute and complete it!

Results

And now, the results. Every test has been run 3 times. What you see here is the average of these 3 runs.

Publish 1 million messages in a direct exchange

Code is here. The queue concerned was purged before each run.

./bench publish [ext|lib] -m 1000000

With the extension

+-------------+-------------+
| Duration    |  36 seconds |
+-------------+-------------+
| Memory peak | 1.5 MiB     |
+-------------+-------------+

With the library

+-------------+-------------+
| Duration    |  46 seconds |
+-------------+-------------+
| Memory peak | 2.5 MiB     |
+-------------+-------------+

So what?

The duration difference is pretty small between the extension and the library. In both cases the memory consumption is very stable (I tried with 100, 1k, 10k and 100k messages; the memory consumption is nearly the same).

Get 100k messages from a queue (+ ack)

Code is here.

./bench get [ext|lib] -m 100000

With the extension

+-------------+-------------+
| Duration    |  19 seconds |
+-------------+-------------+
| Memory peak | 1.8 MiB     |
+-------------+-------------+

With the library

+-------------+-------------+
| Duration    |  43 seconds |
+-------------+-------------+
| Memory peak | 2.8 MiB     |
+-------------+-------------+

So what?

For the memory consumption, again, nothing to say. It's low and stable in both cases. To consume messages, the pecl extension is more than 2 times faster than the library.

Conclusion

The extension is faster than the library. C is faster than PHP. Is it really surprising? No! It's not really relevant to compare 2 tools which obviously do the same job but have different implementations.

The main difference is the installation. The library is VERY simple to install! You just have to add "videlalvaro/php-amqplib": "~2.4" to your composer.json and you're done. On the contrary, for the extension, you generally need to compile rabbitmq-c (an AMQP client library in C used by the php extension), which can be a bit tedious.

So, if you have already installed the extension, or if installing it isn't a problem for you, go for it! Otherwise, don't panic and use the library!

In either case, take a look at Swarrot (and the SwarrotBundle) so you can change your choice later if needed.

]]>
<![CDATA[Speed up your cookbooks tests with docker]]> 2014-09-28T00:00:00+02:00 https://odolbeau.fr/blog/speed-up-cookbooks-tests-docker.html If you're not familiar with chef, a configuration management tool (like Puppet or Ansible for example), you should probably click here and learn how to use it. :)

If you're not familiar with docker, it's not a problem! You just need to install it and to continue reading. :)

For those who already use chef, I'm sure you write a lot of tests to check your cookbooks, don't you? :) And you know that it can take a (very) long time to run the full test suite on different VMs.

But don't worry, from now on, it's over! Look at kitchen-docker.

Just add kitchen-docker to your Gemfile (or install it directly with gem install kitchen-docker) and you can start using docker instead of Vagrant. \o/ Just replace your current driver in .kitchen.yml:

driver:
  name: docker

If you use docker on a Mac (with boot2docker) or inside another machine, you also need to change the socket used by the docker daemon:

platforms:
- name: ubuntu-12.04
  driver_config:
    socket: tcp://docker.example.com:4242

And that's all, you can now launch kitchen test and see the result. :)

On some cookbooks used at BlaBlaCar, running the full test suite is nearly 60% quicker than before.

Of course, it's not the perfect solution and there are some drawbacks. For example:

  • the cron service is not automatically launched
  • on my local environment (mac), when chef changes the DNS used in your docker container, some tests fail
  • some of our tests need a VM with more than 1 disk, which is not possible with docker

Apart from these problems, we save time every day. \o/

]]>
<![CDATA[When Monolog meets ELK]]> 2014-07-18T00:00:00+02:00 https://odolbeau.fr/blog/when-monolog-meet-elk.html For this first article in 2 years (I know, it's been a long time, did you miss me? :D) I'm going to talk about Monolog, Gelf and ELK. It's just a quick introduction but you will find a lot of resources in this article.

Monolog

I'm sure you already know Monolog, the (almost) perfect logging library for PHP. :)

I strongly suggest reading the core concepts of Monolog if you're not familiar with channels, handlers and processors. In a few words, the channel is the name of the logger, handlers are its outputs and processors are there to add extra information to your logs.

As we will see at the end of this article, you can make really interesting filters with your channels and the extra data added by your processors.

Gelf

Gelf stands for Graylog Extended Log Format. This format has been created by Graylog to avoid syslog's shortcomings, like the length limit and the lack of data types and compression.

Gelf messages can be sent by UDP (fortunately!) and of course, the awesome Monolog provides a GelfHandler.

Here is an example of a gelf message (it can be found in the specs):

{
  "version": "1.1",
  "host": "example.org",
  "short_message": "A short message that helps you identify what is going on",
  "full_message": "Backtrace here\n\nmore stuff",
  "timestamp": 1385053862.3072,
  "level": 1,
  "_user_id": 9001,
  "_some_info": "foo",
  "_some_env_var": "bar"
}

As described in the specs, some fields are mandatory (but don't worry about it, just let Monolog deal with it) and you can add as much extra information as you wish.

Here is a small example of a custom handler configuration to send gelf messages to logstash:

#config_dev.yml
monolog:
    ...
    handlers:
        my_logstash_handler:
            type: gelf
            publisher:
                hostname: %logstash_host%
                port: %logstash_port%
            formatter: monolog.formatter.gelf_message
            level: INFO

ELK

ELK is an acronym for ElasticSearch / Logstash / Kibana.

ElasticSearch

ElasticSearch is a very powerful distributed search engine which provides a RESTful API. In the ELK stack, ElasticSearch is the storage backend. All our logs will be stored inside an index.

Logstash

Logstash has been created to manage logs. It collects, parses and stores them. There are a lot of existing inputs (41), filters (50) and outputs (55). For example, look at this configuration file:

input {
    gelf {
        codec => "json"
    }
}

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
    }
}

We have configured a single input, gelf. As we saw, by default, gelf logs are sent through UDP on port 12201 and of course, logstash knows it.

There is no filter in this configuration as we don't really need one for this example. The gelf messages will therefore be sent directly to ElasticSearch.

And finally, there is an elasticsearch output. So, logstash will call the ElasticSearch API to insert logs into an index generated per day.

You can take a look at the full documentation for the gelf input and the elasticsearch output for more information.

Kibana

Kibana is a very powerful tool to see and interact with your data. It's very easy to use and you can create a lot of dashboards to visualize all your logs. Take a look at the project homepage to see some examples.

Tips

Create dashboards for everything!

Is there a particular error in your production environment? Create a dashboard just for it! You will have all available information in a single place, and it will be easier to aggregate information and understand when the error occurred.

Context is your friend, bro!

Of course, you all know PSR-3, which defines a standard PSR\Log\LoggerInterface (used by Monolog, obviously). But did you read the "Context" section? Did you notice that all methods defined in the interface take a $context array as their second argument? Do you use it? No? You should!

This context is the best way to provide more information with your logs. You can easily add all the information needed to know WHEN an error (for example) occurred. And once you send all this context to the ELK stack, you can easily filter your logs according to the context. Does this error occur every time with the same user? Is it only with this particular entity? Anyway, just add context and you will be able to group your logs according to it. :)
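For instance, a call could look like this (a hypothetical example: the message, keys and objects are mine, not taken from a real application):

<?php

// $logger is any PSR-3 logger, a Monolog\Logger for instance.
$logger->error('Unable to process payment', array(
    'user_id'    => $user->getId(),
    'payment_id' => $payment->getId(),
    'amount'     => $payment->getAmount(),
));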

]]>
<![CDATA[Symfony2 cache warmer]]> 2012-07-15T00:00:00+02:00 https://odolbeau.fr/blog/symfony2-cache-warmer.html For this first post in 6 months, let me introduce the CacheWarmer class.

Cache warmer? What is this?

A cache warmer is just a class that writes data to a file in order to cache it. Really simple.

Look into your app/cache directory. Here is an example of what you can find:

  • Global configuration
  • Translations
  • Assetic configuration
  • Doctrine proxies classes
  • Twig compiled templates
  • ...

Lots of things, all created by cache warmers.

How to use a cache warmer?

Just create a class that implements the CacheWarmerInterface provided by the HttpKernel Component.

<?php

use Symfony\Component\HttpKernel\CacheWarmer\CacheWarmerInterface;

class MyCustomCacheWarmer implements CacheWarmerInterface
{
    public function warmUp($cacheDir)
    {
        // Compute the data you want to cache...
        $data = array('some_key' => 'some_value');

        // ... then dump it into a file inside the cache directory.
        file_put_contents($cacheDir.'/my_custom_data.php', '<?php return '.var_export($data, true).';');
    }

    public function isOptional()
    {
        // By default, all cache warmers are called by the `app/console cache:warmup` command.
        // You can pass the `--no-optional-warmers` option to skip the optional ones.
        return false;
    }
}

For the implementation, you can take a look at the TemplatePathsCacheWarmer for example.

Of course, you have to declare your CacheWarmer as a service. Do not forget to tag it with kernel.cache_warmer:

<service id="my_custom.cache_warmer" class="path/to/MyCustomCacheWarmer">
    <tag name="kernel.cache_warmer" />
</service>

Now, if you run the command app/console cache:warmup (or app/console cache:clear without the --no-warmup option) your cache file should be created. (Search it in the app/cache/ folder).

Using your cached data is very simple. Look at the TemplateLocator.

The TemplateLocator simply requires the cached file and stores the result in a variable. To do the same thing, simply store a return statement in your file. For example, the app/cache/dev/templates.php file looks like this:

<?php return array (
    'template_name' => 'path/to/template',
    // ...
);
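Loading it back is then just a matter of requiring the file (a quick sketch; here $cacheDir is assumed to point to the current app/cache/<env> directory):

<?php

$templates = require $cacheDir.'/templates.php';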

You're done. You can now use your custom CacheWarmer in your application. :)

]]>
<![CDATA[Use virtual forms with Symfony2]]> 2012-01-23T00:00:00+01:00 https://odolbeau.fr/blog/use-virtuals-forms-with-symfony2.html This article has been an official cookbook since February 2012.

Wait! What? You already wrote this before! Oo

Yes I did! But in French! And it seems it was not very clear... So let me explain virtual forms again and this time... in English!

We have 2 entities. A Company and a Customer:

<?php

namespace ...;

class Company
{
    private $name;
    private $website;

    private $address;
    private $zipcode;
    private $city;
    private $country;

    // Some nice getters / setters here.
}
<?php

namespace ...;

class Customer
{
    private $firstName;
    private $lastName;

    private $address;
    private $zipcode;
    private $city;
    private $country;

    // Some nice getters / setters here.
}

As you can see, both of our entities have these fields: address, zipcode, city, country.

Now, we want to build 2 forms: one to create/update a Company and a second to create/update a Customer.

Of course, only two entities have to contain this location information... for now! Maybe later, other entities will have these fields. So, we have to find a solution to avoid duplicating our code!

First, we create a very simple CompanyType and CustomerType:

<?php

namespace ...;

use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\FormBuilder;

class CompanyType extends AbstractType
{
    public function buildForm(FormBuilder $builder, array $options)
    {
        $builder
            ->add('name', 'text')
            ->add('website', 'text')
        ;
    }

    public function getDefaultOptions(array $options)
    {
        return array(
            'data_class' => '...\Company',
        );
    }

    public function getName()
    {
        return 'company';
    }
}
<?php

namespace ...;

use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\FormBuilder;

class CustomerType extends AbstractType
{
    public function buildForm(FormBuilder $builder, array $options)
    {
        $builder
            ->add('firstName', 'text')
            ->add('lastName', 'text')
        ;
    }

    public function getDefaultOptions(array $options)
    {
        return array(
            'data_class' => '...\Customer',
        );
    }

    public function getName()
    {
        return 'customer';
    }
}

Definitely nothing complicated here.

Now, we have to deal with our four duplicated fields... Here is a (simple) location FormType:

<?php

namespace ...;

use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\FormBuilder;

class LocationType extends AbstractType
{
    public function buildForm(FormBuilder $builder, array $options)
    {
        $builder
            ->add('address', 'textarea')
            ->add('zipcode', 'text')
            ->add('city', 'text')
            ->add('country', 'text')
        ;
    }

    public function getDefaultOptions(array $options)
    {
        return array(
        );
    }

    public function getName()
    {
        return 'location';
    }
}

  • We can't specify a data_class option in this FormType because we don't have a Location entity.
  • We don't have a location field in our entities, so we can't directly link our LocationType.
  • Of course, we absolutely want to have a dedicated FormType to deal with location (remember, DRY!).

There is a solution!

We can set the 'virtual' => true option in the getDefaultOptions method of our LocationType and directly start using it in our first 2 types.
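Concretely, the getDefaultOptions method of our LocationType now returns the option:

<?php
// LocationType

public function getDefaultOptions(array $options)
{
    return array(
        'virtual' => true,
    );
}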

Look at the result:

<?php
// CompanyType

public function buildForm(FormBuilder $builder, array $options)
{
    $builder->add('foo', new LocationType());
}
<?php
// CustomerType

public function buildForm(FormBuilder $builder, array $options)
{
    $builder->add('bar', new LocationType());
}

With the virtual option set to false (default behavior), the Form Component expects a Foo (or Bar) object or array which contains our four location fields. Of course, we don't have this object/array in our entities and we don't want it!

With the virtual option set to true, the Form Component skips our Foo (or Bar) object or array. So, it directly accesses our 4 location fields, which live on the parent entity!

(Once again, thanks to Alexandre Salomé for the tip!)

]]>
<![CDATA[Using virtual forms with Symfony2]]> 2012-01-09T00:00:00+01:00 https://odolbeau.fr/blog/utiliser-les-forms-virtuals-avec-symfony2.html 2012 has just begun (happy new year, by the way!) and it's starting rather well!

A few days ago, I had to use the virtual attribute of the symfony2 Form component. The need couldn't have been simpler: create a FormType suited to our needs to display an address form. This form type obviously had to be usable within several entities which already have the properties to edit (address, city, zipcode, ...) and their associated getters / setters.
Starting from there, Alexandre Salomé suggested I create a "virtual" FormType! When this attribute is set to true, the created FormType will use the properties of the parent object!
But a concrete example will no doubt speak for itself!

So we have an entity containing, among other things, several address fields:

<?php

namespace ...;

class Company
{
    private $name;
    private $address;
    private $city;

    public function getName()
    {
        return $this->name;
    }
    public function setName($name)
    {
        $this->name = $name;
    }

    public function getAddress()
    {
        return $this->address;
    }
    public function setAddress($address)
    {
        $this->address = $address;
    }

    public function getCity()
    {
        return $this->city;
    }
    public function setCity($city)
    {
        $this->city = $city;
    }
}

We'll limit ourselves to three properties here (one of which won't be used by our FormType).

Now let's move on to our FormType:

<?php

namespace ...;

use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\FormBuilder;

class LocalisationType extends AbstractType
{
    public function buildForm(FormBuilder $builder, array $options)
    {
        $builder
            ->add('address', 'textarea')
            ->add('city', 'text')
        ;
    }

    public function getDefaultOptions(array $options)
    {
        return array(
            'virtual' => true
        );
    }

    public function getName()
    {
        return 'localisation';
    }
}

Note the use of the "virtual" option.
When your form is bound, since our FormType is virtual, it's the properties of our parent object (here Company) that will be updated through the PropertyPath.

Of course, this is an extremely basic FormType! You will probably want to add a few extra fields...

One last thing: don't hesitate to pass the names of the fields that make up your address as options to the Localisation FormType, in order to make it more flexible.

]]>
<![CDATA[Using the Twitter bootstrap with Symfony2]]> 2011-11-11T00:00:00+01:00 https://odolbeau.fr/blog/utiliser-le-bootstrap-twitter-avec-symfony2.html I had been promising myself I would do it for a long time, and I finally did: give Less a real try with Assetic (while I'm at it!).

And since I was going to have a look at this wonderful tool, I might as well do it in the best possible conditions. For that, thanks to Twitter and its bootstrap, which also uses Less (with Preboot.less, to be more precise).

First thing to do: install Less and configure Assetic in your sf2 project. To do so, you can follow the excellent tutorial by Bertrand Zuchuat (aimed first and foremost at Mac owners, but you should be able to follow it with few adaptations, whatever your OS).

Once that's done, all that's left is to use the Twitter bootstrap.

That should be quick, since phiamo and hidenorigoto have already done all the work for you with the MopaBootstrapBundle.

So all you have to do is install this bundle by following the installation instructions, and you will be able to take full advantage of the Twitter bootstrap (including in your forms, since the default style has been reworked for the occasion!)

]]>
<![CDATA[Comic blogs galore...]]> 2011-08-14T00:00:00+02:00 https://odolbeau.fr/blog/des-blogs-bds-a-la-pelle.html Comic book fans? This post is for you! :)

I recently shared on twitter a list of RSS feeds from comic blogs, well known or less known, and mostly in French (with a few exceptions all the same). The list is here!

You can subscribe to it directly through the ATOM feed (if the list looks perfect to you as is! :P) or grab the OPML file containing all the feeds from this list! :) The latter will let you remove some RSS feeds if you're not interested in them! :)

A few of the 18 feeds currently in the list:

Of course, if you know other comic blogs in the same vein as the ones gathered in this list, don't hesitate to let me know in the comments! :D

]]>