When I land on a project that uses GNU Make, I'm happy! 🥰
However, I often need (or rather want) to define my own targets: sometimes specific to my environment, sometimes very close to an existing target but with THE variation I like, sometimes specific to a particular problem, ...
These targets are usually of little interest to colleagues (who probably have their own wishes, with THE variation they like), so adding them directly to the Makefile isn't relevant and could even be counterproductive: if everyone did the same, we would quickly end up with a file several hundred lines long and therefore hard to read.
Fortunately, there are several solutions!
The first solution consists in modifying the existing Makefile to slip in a small line, simple but devilishly effective:
-include Makefile.local
The line speaks for itself: it includes a Makefile.local file (located in the same directory), and the - in front of the instruction prevents an error from being raised if the file doesn't exist.
Then you just have to add Makefile.local to the project's .gitignore, commit everything, and mission accomplished: everyone on the team can declare their own targets in a Makefile.local file without impacting their colleagues! 🥳
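For instance, a Makefile.local could contain a purely personal target (the target below is a made-up example; remember that recipe lines must be indented with a tab):
# Makefile.local: personal targets, never versioned
.PHONY: db-reset
db-reset:
	dropdb myapp_dev || true
	createdb myapp_dev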
Sometimes it may not be possible (or may take too long) to modify the existing Makefile. Don't panic, there is another solution!
When it is launched, the make command checks for the existence of the files GNUmakefile, makefile and Makefile (in that order) to find its configuration.
Using a file named Makefile is what the documentation officially recommends and, if github is to be believed, it is indeed the most popular choice: 30k occurrences of GNUmakefile, 224k occurrences of makefile and... no less than 3.1 million occurrences of Makefile.
Knowing that, it's possible to play with this priority order by creating a GNUmakefile file and adding the following line to it:
include Makefile
Same strategy as before, but reversed this time: we include the existing Makefile before declaring our own targets.
For my part, I added GNUmakefile to my global .gitignore to be sure this file never gets versioned.
Here is an example of a GNUmakefile that I use when I need it. I'll let you copy this file into an existing project to admire the result. :)
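As a rough sketch (not the original file; the extra target is a made-up example), such a GNUmakefile could look like this:
# GNUmakefile: read by make before the project's Makefile, and not versioned
include Makefile

.PHONY: my-target
my-target:
	@echo "A personal target living next to the project's ones"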
If you're working on a fast moving project, it's easy to end up with hundreds of migrations living in your project.
I have a question for you: do you really think you will have to roll back the migration Version20190503193054.php (which has been in your repository for more than a year) one day or another?
If the answer is "yes", I'll be glad to hear your arguments on twitter. Otherwise, you may be interested in this article.
The doctrine:migrations:rollup command
Even if it's not documented (yet?), doctrine provides a feature to get rid of all your useless migrations.
On paper, it's pretty simple: generate a single migration containing your whole schema with doctrine:migrations:dump-schema, then mark it as the only migrated version with doctrine:migrations:rollup.
doctrine:migrations:rollup in production
As you may have noticed, even if it looks simple, deploying a new migration containing the whole creation of your database in production is not a good idea. The migration contains all the queries needed to create your whole schema, but you don't want to run them on an existing database (it would fail anyway, as your database already contains those tables).
To avoid this problem, there is a simple solution. You can alter your migration manually to skip it when tables already exist in the schema. To achieve this, you can use the schema manager at the beginning of the migration.
if ($this->sm->tablesExist('member')) {
return;
}
If the member table already exists (which is probably the case in production if you have a table named like this), the migration will be skipped.
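As a hedged sketch of where this check sits, here is what the rolled-up migration might look like (class name, namespace and SQL are illustrative; the imports assume a doctrine/migrations 2.x setup, where AbstractMigration still exposes $this->sm):
<?php

namespace DoctrineMigrations;

use Doctrine\DBAL\Schema\Schema;
use Doctrine\Migrations\AbstractMigration;

final class Version20200101000000 extends AbstractMigration
{
    public function up(Schema $schema): void
    {
        // The schema already exists in production: skip the whole rolled-up migration.
        if ($this->sm->tablesExist('member')) {
            return;
        }

        // All the CREATE TABLE statements generated by dump-schema go here.
        $this->addSql('CREATE TABLE member (id INT AUTO_INCREMENT NOT NULL, PRIMARY KEY (id))');
    }

    public function down(Schema $schema): void
    {
        $this->addSql('DROP TABLE member');
    }
}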
Everything's fine, you can now deploy your migration & run the doctrine:migrations:rollup command.
With the process described in the previous chapter, you have to manually run the rollup command on your production. This step can easily be avoided!
In your deployment process, you probably have post deployment scripts (to automatically apply your migrations in production for example?). If it's the case, you can add those few lines to automatically launch the rollup command if relevant.
# If the migrations folder contains exactly one file, it's the rolled-up one: mark it as migrated.
if [ 1 == `ls -1 $PATH_TO_MIGRATIONS/ | wc -l` ]; then
    php bin/console doctrine:migrations:rollup
fi
Here it is! If there is one (and only one) migration available, it means you can run the rollup automatically. Don't worry, this command won't fail (or do anything) if you launch it several times with the same migration.
I hope this article will help you remove all these useless files living in your project. :)
Let me know on twitter if it helps or if you have any questions!
As you may know, I'm pretty familiar with chef and I use it almost every day, for both professional & personal stuff. Despite that, I'm quite willing to try something else, and Ansible is a well known (and widely used!) configuration management tool. I know a lot of people who are quite pleased to use it!
Furthermore, I changed my laptop last week so it was the perfect occasion to give it a try :).
As you will see, Ansible is really easy to use.
As I said, my goal is to automatically set up a development laptop. I use debian and I will only focus on it. :)
I would like:
First of all, let's start with the project tree:
.
├── bin
│ └── bootstrap
├── laptop.yml
├── Makefile
├── README.md
└── roles
└── common
├── files
│ └── ssh
│ ├── config
│ ├── id_rsa
│ └── id_rsa.pub
└── tasks
├── main.yml
└── nginx.yml
There aren't a lot of files, which is a good point for maintainability, right? :)
Furthermore, the installation process is very simple: run the bin/bootstrap command, then make install.
As you can see, I only have 2 commands to run: it seems one of my goals is already reached! \o/
Let's explain those 2 steps.
As ansible isn't installed by default on your laptop, the goal of the bootstrap is to install it. Furthermore, I don't want to deal with the ansible command line because there are several arguments to pass and I'm used to running make install for everything; that's why I need make too. Finally, as ansible will be run by my user and not by root, I need some privileges, which is why I also install sudo and grant all privileges to the current user.
Here is the bootstrap script:
#!/bin/bash
echo "Installing sudo, make & ansible, and allow user \"${USER}\" to run any command with sudo..."
LOCAL_USER=${USER} su -c 'apt-get install sudo make ansible && echo "${LOCAL_USER} ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers.d/${LOCAL_USER}'
Once all prerequisites are installed, we can use ansible.
As I said, I use a Makefile. It contains only one target:
.PHONY: install
install:
	ansible-playbook -i '127.0.0.1,' laptop.yml --ask-vault-pass
We simply ask ansible to run the playbook named laptop.yml on 127.0.0.1.
Forget the --ask-vault-pass option for now, we'll discuss it later! ;)
As said before, we ask ansible to run a playbook. In our case, it's called laptop.yml
and here is the content of this file:
---
- hosts: 127.0.0.1
connection: local
roles:
- common
The only host affected is 127.0.0.1.
We use the local connection (you can use ssh to configure a remote server, for instance).
Then we list all the roles which concern our host.
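As an illustration, pointing the same playbook at a remote machine over ssh could look roughly like this (hostname and remote user are hypothetical):
---
- hosts: my-server.example.com
  remote_user: odolbeau
  roles:
    - common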
It's a very simple playbook and I won't go into details on this subject for two reasons:
If you're interested anyway, you can take a look at the official documentation.
Our playbook mentions only 1 role: common.
Let's have a look at it:
roles/common/
├── files
│ └── ssh
│ ├── config
│ ├── id_rsa
│ └── id_rsa.pub
└── tasks
├── main.yml
└── nginx.yml
It contains several files related to ssh and two task files, respectively called main.yml and nginx.yml.
You will always have a main.yml task in a role, as it's the default entry point. Here is an extract of this file:
---
- name: install packages
  become: true
  apt: name="{{ item }}" state=present
  with_items:
    - ack-grep
    - composer
    - curl
    - make
    # ...

- name: Install slack
  apt:
    deb: https://downloads.slack-edge.com/linux_releases/slack-desktop-2.1.0-amd64.deb
    state: present
  become: true

- name: Install ssh keys
  copy:
    src: "ssh/{{ item }}"
    mode: "0644"
    dest: /home/odolbeau/.ssh/
  with_items:
    - id_rsa
    - id_rsa.pub

- name: Install ssh config
  copy:
    src: "ssh/config"
    mode: "0644"
    dest: /home/odolbeau/.ssh/

- name: Download dot files from github
  git: repo=ssh://git@github.com/odolbeau/dot-files.git dest=/home/odolbeau/dot-files

- name: Install dot files
  command: make -C /home/odolbeau/dot-files install

- name: Download VIM configuration from github
  git: repo=ssh://git@github.com/odolbeau/vim-config.git dest=/home/odolbeau/vim-config

- name: Install VIM configuration
  command: make -C /home/odolbeau/vim-config install

- include: nginx.yml
There are several instructions in this file. As you may have noticed, everything is in yaml and clearly understandable.
Let's explain some of these instructions:
- name: install packages
  become: true
  apt: name="{{ item }}" state=present
  with_items:
    - ack-grep
    - composer
    - curl
    - make
    # ...
Most of the ansible instructions speak for themselves!
In this case, we create a task which uses the apt module to install packages. The task is run once for each item listed under the with_items key.
The become: true option is used to run this task as root (because the default value for become_user is root).
- name: Install slack
  apt:
    deb: https://downloads.slack-edge.com/linux_releases/slack-desktop-2.1.0-amd64.deb
    state: present
  become: true
In this case, we still use the apt module, this time to install a remote package.
Notice that you can use the inline syntax like in the first example, with apt: deb="...", or the extended syntax like here.
- name: Install ssh config
  copy:
    src: "ssh/config"
    mode: "0644"
    dest: /home/odolbeau/.ssh/
Again, a very easy to understand task! I simply want to copy files from my role (placed under my_role/files/) to my laptop. Easy! \o/
- name: Download dot files from github
git: repo=ssh://git@github.com/odolbeau/dot-files.git dest=/home/odolbeau/dot-files
- name: Install dot files
command: make -C /home/odolbeau/dot-files install
Those 2 tasks are used to install my dot-files. The first one uses git to clone the repository and the second executes a make install inside the correct folder.
I won't list all the modules I use though. There are plenty of them and their documentation is very clear! Don't forget to have a look at existing modules before running a command by yourself. :)
You now know everything you need to start using ansible by yourself for a single host!
Let's be honest: sometimes, it's hard / painful / time-consuming / impossible to do everything with a configuration management tool.
For instance, in my case, I need to install a VPN client and to create a tunnel in order to download some private projects.
Once the VPN is installed, here is what I use:
- command: ping -c 1 "a.private.url"
  register: vpn_connected
  ignore_errors: True

- pause:
    prompt: "Make sure to run the VPN in order to continue the installation. [Press any key once done]"
  when: vpn_connected|failed
I try to ping a private URL and register the result of this command inside the vpn_connected var.
Then I use the pause module: if the tunnel isn't running, I simply ask the user to launch it; otherwise, the playbook keeps going!
Of course, the goal is not to use this trick every time: if your users have to do everything manually, you're not using a configuration management tool correctly! Use this only when you really can't configure something automatically.
As previously explained, I use ansible to install my private ssh keys. Even if all my private keys are protected by a passphrase, I don't want to version them without encryption!
In my case, I use a private repository to store my ansible configuration. In this situation, it's not really necessary to encrypt your keys but as you will see, it's very easy to do! :)
Of course, encrypting keys / passwords / files is a common use case and Ansible proposes a very powerful solution to deal with it: Vault.
It's shipped with the ansible
package and it's easy to use! I mean, really easy!
If you want to encrypt your ssh keys:
# Copy files into your role
cp ~/.ssh/id_rsa ~/.ssh/id_rsa.pub my_role/files/
# Encrypt them!
ansible-vault encrypt my_role/files/id_rsa my_role/files/id_rsa.pub
New Vault password:
Confirm New Vault password:
Encryption successful
And that's it! Your files are now encrypted (congrats by the way! :)).
As soon as you have encrypted files in a role, you'll have to add the --ask-vault-pass option when running the ansible-playbook command.
I hope I convinced you: it's pretty convenient to write an ansible playbook to install your laptop! The next time you change laptops, you will save a lot of time by running 2 commands instead of installing everything manually! :)
Ansible is easy to use, which makes it a good choice to fulfill this need, but there are a lot of other configuration management tools out there, so don't hesitate to take a look at them! :)
For those who don't know it, I gave a presentation of the ELK stack at the PHP Forum in Paris (in French). During this talk, I mainly talked about our (BlaBlaCar) technical logs. I also mentioned that we use ELK to build some marketing dashboards too (signups by country, payments and so on), even if I said that I don't consider ELK the best tool for this.
Last week Claude Duvergier (@C_Duv) asked me on twitter which tool I would recommend instead of ELK. This made me think that, in fact, ELK is not a bad choice, even if some other tools exist.
Disclaimer: I'm definitely not aware of all existing tools for building marketing dashboards. I will only talk about solutions I have already used. Sorry if I missed something important. This article is more my personal feedback on some tools we use than a real comparison.
Let me start with the one I probably know best. ELK is very easy to set up and to use. You just have to send all important events to logstash or anything else (in our case, we send them to a RabbitMQ broker) and store them in ElasticSearch. Kibana will then help you display all these events.
By design, all elasticsearch queries are made on the fly. ElasticSearch is "just" a data store and doesn't aggregate anything. You're able to update your queries whenever you want, filter results, etc. directly in Kibana, which is really convenient. The drawback is the time needed to render some graphs over a long period.
Another point to consider with ELK is the storage needed for all your data. It can be quite huge depending on what you choose to store and on your retention policy.
Pros:
Cons:
For those who don't know it, NewRelic is an Application Performance Management (APM) tool. I won't talk a lot about it because, even if you can build some graphs based on page traffic, it's definitely not the solution to our problem.
I chose to put it in this list only because we use some New Relic dashboards for technical needs (to check registrations & payments after a deployment, for example) and they can look like marketing dashboards.
Pros:
Cons:
Recently introduced at BlaBlaCar, I have fallen in love with this time series database. For now, we use it for code instrumentation purposes and we only have technical dashboards (and a bunch of issues with collected data. :P). The latest stable release of InfluxDB is 0.8.5 and cluster support is experimental. Despite all of this, it works well. :) It's not the goal here, so I will not talk about the design of InfluxDB. You are strongly encouraged to take a look at it.
Like ELK, you need to send events with all the relevant information you need. This time however, you shouldn't query the created series directly. You have to create some continuous queries in order to split your data.
For example, in our case, we send an event for each http query. This event contains the called route, and we have a continuous query which creates new series per route and calculates the average response time and memory usage with an aggregation of 1 minute. Another continuous query does the same, every hour.
To create dashboards with data from InfluxDB, we use grafana, a kibana clone which supports InfluxDB, Graphite and OpenTSDB. With this system, we always query the same series. It's really fast, even if you display a lot of series over a long period.
Pros:
Cons:
I know there are a lot of other solutions available to create marketing dashboards. I hesitated to talk about Hadoop / Vertica / Tableau, which are also used at BlaBlaCar for data analysis. However, as I don't personally use these tools, they don't fit my initial requirements.
Despite what I said during my talk at ForumPHP Paris, ELK IS a very good choice, even more so if you already use this stack to analyze your logs and if you don't need to display a long period on your marketing dashboards. If you have more time and want to use a time series database, I recommend InfluxDB, even if it's still really young software. :)
If you use another tool and think it's THE perfect solution, let me know. :)
Everything started on 7 April 2014. We gave a talk with Grégoire Pineau (aka @lyrixx) called "Making asynchronous tasks in PHP" (slides are available on speakerdeck).
During this talk we trolled about the php-amqplib created by Alvaro Videla and maintained by a lot of users.
This library is one of the 2 main ways to talk to an amqp broker. The other one is the php extension.
Anyway, the subject came back a few days ago on twitter (the conversation is in French) when Ölbaum asked us if we still recommend not using php-amqplib. A few tweets later, I proposed to make a small benchmark to compare these 2 ways of talking to a broker. Because trolling is good, but sometimes having some real arguments is better.
I chose to use Swarrot to write as little code as possible for each implementation.
The full project used for this benchmark can be found on github. Of course, feel free to contribute and complete it!
And now, the results. Every test has been launched 3 times. What you see here is the average time of these 3 runs.
Code is here. The queue concerned has been purged before each launch.
./bench publish [ext|lib] -m 1000000
Extension:
+-------------+-------------+
| Duration    | 36 seconds  |
+-------------+-------------+
| Memory peak | 1.5 MiB     |
+-------------+-------------+
Library:
+-------------+-------------+
| Duration    | 46 seconds  |
+-------------+-------------+
| Memory peak | 2.5 MiB     |
+-------------+-------------+
The duration difference is pretty small between the extension and the library. In both cases the memory consumption is very stable (I tried with 100, 1k, 10k and 100k messages, the memory consumption is nearly the same).
Code is here.
./bench get [ext|lib] -m 1000000
Extension:
+-------------+-------------+
| Duration    | 19 seconds  |
+-------------+-------------+
| Memory peak | 1.8 MiB     |
+-------------+-------------+
Library:
+-------------+-------------+
| Duration    | 43 seconds  |
+-------------+-------------+
| Memory peak | 2.8 MiB     |
+-------------+-------------+
For the memory consumption, again, nothing to say. It's low and stable in both cases. To consume messages, the pecl extension is more than 2 times faster than the library.
The extension is faster than the library. C is faster than PHP. Is it really surprising? No! It's not really meaningful to compare 2 tools which obviously do the same job but have different implementations.
The main difference is the installation. The library is VERY simple to install! You just have to add "videlalvaro/php-amqplib": "~2.4" to your composer.json and you're done. The extension, on the contrary, generally requires you to compile rabbitmq-c (an AMQP client in C used by the php extension), which can be a bit tedious.
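For reference, the corresponding entry in composer.json is just:
{
    "require": {
        "videlalvaro/php-amqplib": "~2.4"
    }
}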
So, if you already installed the extension or if installing it is not a problem for you, go for it! Otherwise, don't panic and use the library!
In both cases, take a look at Swarrot (and the SwarrotBundle) to be able to change your choice later if needed.
If you're not familiar with docker it's not a problem! You just need to install it and keep reading. :)
For those who already use chef, I'm sure you write a lot of tests to check your cookbooks, don't you? :) And you know that it can take a (very) long time to run the full test suite on different VMs.
But don't worry, from now on, it's over! Look at kitchen-docker.
Just add kitchen-docker to your Gemfile (or install it directly with gem install kitchen-docker) and you can start using docker instead of Vagrant. \o/ Just replace your current driver in .kitchen.yml:
driver:
  name: docker
If you use docker on a Mac (with boot2docker) or inside another machine, you also need to change the socket used by the docker daemon:
platforms:
  - name: ubuntu-12.04
    driver_config:
      socket: tcp://docker.example.com:4242
And that's all, you can now launch kitchen test and see the result. :)
On some cookbooks used at BlaBlaCar, running the full test suite is nearly 60% quicker than before.
Of course, it's not the perfect solution and there are some drawbacks. For example:
* the cron service is not automatically launched
* on my local environment (mac), when chef changes the DNS used in your docker container, some tests fail
* some of our tests need a VM with more than 1 disk, which is not possible with docker
Apart from these problems, we save time every day. \o/
I'm sure you already know Monolog, the (almost) perfect logging library for PHP. :)
I strongly suggest you read the core concepts of Monolog if you're not familiar with channels, handlers and processors. In a few words: the channel is the name of the logger, handlers are its outputs and processors are there to add extra information to your logs.
As we will see at the end of this article, you can build really interesting filters with your channels and the extra data added by your processors.
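As a quick, hedged illustration (not taken from a real project), a processor is just a callable that enriches every record passing through the logger:
<?php

use Monolog\Logger;

$logger = new Logger('app');

// Add an extra field to every record handled by this logger.
$logger->pushProcessor(function (array $record) {
    $record['extra']['hostname'] = gethostname();

    return $record;
});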
Gelf means Graylog Extended Log Format. This format has been created by Graylog to avoid all the syslog drawbacks, like the length limit and the lack of data types and compression.
Gelf messages can be sent over UDP (fortunately!) and of course, the awesome Monolog provides a GelfHandler.
Here is an example of a gelf message (it can be found in the specs):
{
"version": "1.1",
"host": "example.org",
"short_message": "A short message that helps you identify what is going on",
"full_message": "Backtrace here\n\nmore stuff",
"timestamp": 1385053862.3072,
"level": 1,
"_user_id": 9001,
"_some_info": "foo",
"_some_env_var": "bar"
}
As described in the specs, some fields are mandatory (but don't worry about it, just let Monolog deal with them) and you can add as many additional fields as you wish.
Here is a small example of a custom handler to log gelf messages in logstash:
#config_dev.yml
monolog:
...
handlers:
my_logstash_handler:
type: gelf
publisher:
hostname: %logstash_host%
port: %logstash_port%
formatter: monolog.formatter.gelf_message
level: INFO
ELK is an acronym for ElasticSearch / Logstash / Kibana.
ElasticSearch is a very powerful distributed search engine which provides a RESTful API. In the ELK stack, ElasticSearch is the storage backend. All our logs will be stored inside an index.
Logstash has been created to manage logs. It collects, parses and stores them. There are a lot of existing inputs (41), filters (50) and outputs (55). For example, look at this configuration file:
input {
    gelf {
        codec => "json"
    }
}

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
    }
}
We have configured a single input, which is gelf. As we saw, gelf logs are sent by default through UDP on port 12201 and of course, logstash knows it.
There is no filter in this configuration as we don't really need one for this example. The gelf messages will therefore be sent directly to ElasticSearch.
And finally, there is an elasticsearch output. Logstash will call the ElasticSearch API to insert logs into an index, generated per day.
You can take a look at the full documentation for the gelf input and the elasticsearch output to get more information.
Kibana is a very powerful tool to see and interact with your data. It's very easy to use and you can create a lot of dashboards to visualize all your logs. Take a look at the project homepage to see some examples.
Is there a particular error in your production environment? Create a dashboard just for it! You will have all the available information in a single place; it will be easier to aggregate information and understand when the error occurred.
Of course, you all know PSR-3, which defines a standard PSR\Log\LoggerInterface (used by Monolog, obviously). But did you read the "Context" section? Did you notice that all methods defined in the interface take a $context array as second argument? Do you use it? No? You should!
This context is the best way to provide more information with your log. You can easily add all the information needed to know WHEN an error (for example) occurred. And once you send all this context to the ELK stack, you can easily filter your logs according to the context. Does this error occur every time with the same user? Is it only with this particular entity? Anyway, just add context and you will be able to group your logs according to it. :)
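For example, a hedged sketch of what that looks like in practice (the message and context keys are made up):
<?php

// $logger is any PSR-3 logger (Monolog, for instance).
$logger->error('Unable to confirm the booking', array(
    'user_id'    => $userId,
    'booking_id' => $bookingId,
    'exception'  => $exception->getMessage(),
));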
A cache warmer is just a class that writes data to a file. Really simple.
Look into your app/cache directory. Here is an example of what you can find:
Lots of things, created by cache warmers.
Just create a class that implements the CacheWarmerInterface provided by the HttpKernel Component.
<?php

use Symfony\Component\HttpKernel\CacheWarmer\CacheWarmerInterface;

class MyCustomCacheWarmer implements CacheWarmerInterface
{
    public function warmUp($cacheDir)
    {
        // Compute your data and dump it into a file in the cache directory.
        // (The file name and content below are just an example.)
        $data = array('foo' => 'bar');

        file_put_contents($cacheDir.'/my_custom_data.php', '<?php return '.var_export($data, true).';');
    }

    public function isOptional()
    {
        // By default, all cache warmers are called by the `app/console cache:warmup` command,
        // but you can pass the `--no-optional-warmers` option to skip the optional ones.
        return false;
    }
}
For the implementation, you can take a look at the TemplatePathsCacheWarmer for example.
Of course, you have to declare your CacheWarmer as a service. Do not forget to tag it with kernel.cache_warmer:
<service id="my_custom.cache_warmer" class="path/to/MyCustomCacheWarmer">
<tag name="kernel.cache_warmer" />
</service>
Now, if you run the command app/console cache:warmup (or app/console cache:clear without the --no-warmup option), your cache file should be created. (Look for it in the app/cache/ folder.)
Using your cached data is very simple. Look at TemplateLocator.
The TemplateLocator simply requires the cached file and stores the result in a variable. To do the same thing, simply put a return statement in your cached file. For example, the app/cache/dev/templates.php file looks like this:
<?php return array(
    'template_name' => 'path/to/template',
    // ...
);
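On the reading side, loading such a file can be as small as this (a hedged sketch; the variable names are made up):
<?php

// The return statement in the warmed file gives the array straight back.
$templates = require $cacheDir.'/templates.php';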
You're done. You can now use your custom CacheWarmer in your application. :)
Wait! What? You already wrote this before! Oo
Yes I did! But in French! And it seems it was not very clear... So let me explain virtual forms again and this time... in English!
We have 2 entities, a Company and a Customer:
<?php
namespace ...;
class Company
{
private $name;
private $website;
private $address;
private $zipcode;
private $city;
private $country;
// Some nice getters / setters here.
}
<?php
namespace ...;
class Customer
{
private $firstName;
private $lastName;
private $address;
private $zipcode;
private $city;
private $country;
// Some nice getters / setters here.
}
As you can see, both of our entities have these fields: address, zipcode, city, country.
Now, we want to build 2 forms: one to create/update a Company and a second to create/update a Customer.
Of course, only two entities have to contain this location information... for now! Maybe later, other entities will have these fields too. So, we have to find a solution to avoid duplicating our code!
First, we create very simple CompanyType and CustomerType:
<?php
namespace ...;
use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\FormBuilder;
class CompanyType extends AbstractType
{
public function buildForm(FormBuilder $builder, array $options)
{
$builder
->add('name', 'text')
->add('website', 'text')
;
}
public function getDefaultOptions(array $options)
{
return array(
'data_class' => '...\Company',
);
}
public function getName()
{
return 'company';
}
}
<?php
namespace ...;
use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\FormBuilder;
class CustomerType extends AbstractType
{
public function buildForm(FormBuilder $builder, array $options)
{
$builder
->add('firstName', 'text')
->add('lastName', 'text')
;
}
public function getDefaultOptions(array $options)
{
return array(
'data_class' => '...\Customer',
);
}
public function getName()
{
return 'customer';
}
}
Definitely nothing complicated here.
Now, we have to deal with our four duplicated fields... Here is a (simple) location FormType:
<?php
namespace ...;
use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\FormBuilder;
class LocationType extends AbstractType
{
public function buildForm(FormBuilder $builder, array $options)
{
$builder
->add('address', 'textarea')
->add('zipcode', 'text')
->add('city', 'text')
->add('country', 'text')
;
}
public function getDefaultOptions(array $options)
{
return array(
);
}
public function getName()
{
return 'location';
}
}
We can't specify a data_class option in this FormType because we don't have a Location entity.
We don't have a location field in our entities, so we can't directly link our LocationType.
Of course, we absolutely want a dedicated FormType to deal with location (remember, DRY!).
There is a solution!
We can set the option 'virtual' => true in the getDefaultOptions method of our LocationType and directly start using it in our first 2 types.
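For reference, the option goes into the getDefaultOptions method of the LocationType shown above:
<?php

// LocationType
public function getDefaultOptions(array $options)
{
    return array(
        'virtual' => true,
    );
}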
Look at the result:
<?php
// CompanyType
public function buildForm(FormBuilder $builder, array $options)
{
$builder->add('foo', new LocationType());
}
<?php
// CustomerType
public function buildForm(FormBuilder $builder, array $options)
{
$builder->add('bar', new LocationType());
}
With the virtual option set to false (the default behavior), the Form component expects a Foo (or Bar) object or array which contains our four location fields. Of course, we don't have this object/array in our entities and we don't want it!
With the virtual option set to true, the Form component skips our Foo (or Bar) level. It directly accesses our 4 location fields, which live in the parent entity!
(One more time, thanks to Alexandre Salomé for the tip.)
A few days ago, I had to use the virtual attribute of the symfony2 Form component. The need was as simple as can be: create a FormType suited to our needs to display an address form. This FormType must of course be usable within several entities that already have the properties to edit (address, city, zipcode, ...) and their associated getters / setters.
Based on that, Alexandre Salomé suggested I create a "virtual" FormType! When this attribute is set to true, the created FormType will use the properties of the parent object!
But a concrete example will undoubtedly speak for itself!
So we have an entity containing, among other things, several address fields:
<?php
namespace ...;
class Company
{
private $name;
private $address;
private $city;
public function getName()
{
return $this->name;
}
public function setName($name)
{
$this->name = $name;
}
public function getAddress()
{
return $this->address;
}
public function setAddress($address)
{
$this->address = $address;
}
public function getCity()
{
return $this->city;
}
public function setCity($city)
{
$this->city = $city;
}
}
We will stick to only three properties here (one of which won't be useful for our FormType).
Let's move on to our FormType:
<?php
namespace ...;
use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\FormBuilder;
class LocalisationType extends AbstractType
{
public function buildForm(FormBuilder $builder, array $options)
{
$builder
->add('address', 'textarea')
->add('city', 'text')
;
}
public function getDefaultOptions(array $options)
{
return array(
'virtual' => true
);
}
public function getName()
{
return 'localisation';
}
}
Note the use of the "virtual" option.
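To use it, the parent form simply embeds the virtual type; a minimal sketch in a CompanyType would look like this (the field name is arbitrary):
<?php

// CompanyType
public function buildForm(FormBuilder $builder, array $options)
{
    $builder
        ->add('name', 'text')
        ->add('localisation', new LocalisationType())
    ;
}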
When your form is bound, since our FormType is virtual, it's the properties of our parent object (here Company) that will be updated by the PropertyPath.
Of course, this is an extremely basic FormType! You will probably want to add a few more fields...
One last thing: don't hesitate to pass the names of the fields that make up your address as options to the Localisation FormType, so that it stays more flexible.