From aba2696d82632e2f4e58a47e6b7240951b34547b Mon Sep 17 00:00:00 2001
From: Kai Moritz
-The problem is well known. There are countless tutorials on how to speed up banner delivery with the help of the single-page-call technique. Single-page-call combines the requests, that have to be made to the ad server for the individual banners, into one single request and thereby speeds up the banner delivery, because unnecessary HTTP requests are avoided. But this only mitigates the actual problem - it does not solve it: -
-
-The browser has to load and execute a <script>-tag the moment it encounters it in the HTML source of the page, because it might, for example, contain a document.write() call, that modifies the page in place. This is aggravated by the fact, that the browser must not load any other resources, while it is downloading the script.
-
-This becomes unpleasantly noticeable, when OpenX delivers as "banner" in turn the JavaScript code of another ad server (e.g. Google Ads), so that the wait times, until the browser can continue rendering the page, add up. If just one of the ad servers in such a chain is overloaded and responds slowly, the browser has to wait! -
-
-The solution to this problem is well known: the JavaScript tags are moved to the end of the HTML page - ideally directly before the closing </body>-tag. A simple approach would be to move the banners as close to the end of the page as possible and then position them via CSS. But this approach only works for banners of the type superbanner or skyscraper. As soon as a banner has to be placed inside the content, it becomes hard (if not impossible) to reserve the right amount of space for it via CSS.
-
-Moreover, it would be even nicer, if the loading of the banners could be triggered only after the page has been fully loaded (and/or the page's own scripts have been started/executed), e.g. via the JavaScript event window.onload, so that the page is already fully usable, before the banners have finished loading.
-
That all sounds nice and simple - but as so often, unfortunately: -
-
-/** Optimized methods for ad insertion via OpenX */
-
-/** see: http://enterprisejquery.com/2010/10/how-good-c-habits-can-encourage-bad-javascript-habits-part-1/ */
-
-(function( coolibri, $, undefined ) {
-
- var
-
- /** Must be adjusted, whenever the zones in OpenX are changed/extended! */
- zones = {
- 'oa-superbanner' : 15, // Superbanner
- 'oa-skyscraper' : 16, // Skyscraper
- 'oa-rectangle' : 14, // Medium Rectangle
- 'oa-content' : 13, // content, horizontal
- 'oa-marginal' : 18, // remaining space in the marginal column
- 'oa-article' : 17, // remaining space below the article
- 'oa-prime' : 19, // Prime Place
- 'oa-gallery': 23 // Medium Rectangle Gallery
- },
-
- domain = document.location.protocol == 'https:' ? 'https://openx.coolibri.de:8443':'http://openx.coolibri.de',
-
- id,
- node,
-
- count = 0,
- slots = {},
- queue = [],
- ads = [],
- output = [];
-
-
- coolibri.show_ads = function() {
-
- var name, src = domain;
-
- /**
-  * Without this option, jQuery appends a timestamp to every URL, that is
-  * fetched via $.getScript(). This can cause problems with scripts from
-  * third-party vendors, if these assume, that the requested URL is not
-  * modified...
-  */
- $.ajaxSetup({ cache: true });
-
- src += "/www/delivery/spc.php?zones=";
-
- /** Only fetch the banners, that are actually needed on this page */
- for(name in zones) {
- $('.oa').each(function() {
- var
- node = $(this),
- id;
- if (node.hasClass(name)) {
- id = 'oa_' + ++count;
- slots[id] = node;
- queue.push(id);
- src += escape(id + '=' + zones[name] + "|");
- }
- });
- }
-
- src += "&nz=1&source=" + escape(OA_source);
- src += "&r=" + Math.floor(Math.random()*99999999);
- src += "&block=1&charset=UTF-8";
-
- if (window.location) src += "&loc=" + escape(window.location);
- if (document.referrer) src += "&referer=" + escape(document.referrer);
-
- $.getScript(src, init_ads);
-
- src = domain + '/www/delivery/fl.js';
- $.getScript(src);
-
- }
-
- function init_ads() {
-
-   var i, id;
-
-   /** Only queue the slots, for which the OpenX-server actually delivered banner-code */
-   for (i=0; i<queue.length; i++) {
-     id = queue[i];
-     if (OA_output[id] && OA_output[id].length > 0)
-       ads.push(id);
-   }
-
-   /** Redirect document.write()/document.writeln() into our output-buffer */
-   document.write = document_write;
-   document.writeln = document_write;
-
-   render_ads();
-
- }
-
- function render_ads() {
-
-   while (ads.length > 0) {
-
-     var result, src, inline, i;
-
-     id = ads.shift();
-     node = slots[id];
-
-     node.slideDown();
-
-     // node.append(id + ": " + node.attr('class'));
-
-     /**
-      * If any output was produced via document.write() in the meantime, it has
-      * to be inserted first (that is, before the remaining statements delivered
-      * by the OpenX-server are processed).
-      */
-     insert_output();
-
-     while ((result = /<script/i.exec(OA_output[id])) != null) {
-       node.append(OA_output[id].slice(0,result.index));
-       /** Cut OA_output[id] down to the text starting at "<script" */
-       OA_output[id] = OA_output[id].slice(result.index,OA_output[id].length);
-       result = /<script([^>]*)>([\s\S]*?)<\/script>/i.exec(OA_output[id]);
-       if (result == null) {
-         /** Invalid syntax in the OpenX-response. Ignore the rest of the response! */
-         // alert(OA_output[id]);
-         OA_output[id] = "";
-       }
-       else {
-         /** Remember the inline-code, if present */
-         src = result[1];
-         inline = result[2];
-         /** Cut OA_output[id] down to the text after the closing </script>-tag */
-         OA_output[id] = OA_output[id].slice(result[0].length,OA_output[id].length);
-         result = /src\s*=\s*['"]([^'"]*)['"]/i.exec(src);
-         if (result == null) {
-           /** script-tag with inline-statements: execute the inline-statements! */
-           eval(inline);
-         }
-         else {
-           if (OA_output[id].length > 0)
-             /** The banner-code has not been rendered completely yet! */
-             ads.unshift(id);
-           /** Now, first load and process the script... */
-           $.getScript(result[1], render_ads); // << jQuery.getScript() creates onload-handlers for _all_ browsers ;)
-           return;
-         }
-       }
-     }
-
-     node.append(OA_output[id]);
-     OA_output[id] = "";
-   }
-
-   /** All entries from OA_output have been rendered */
-
-   id = undefined;
-   node = undefined;
-
- }
-
- /** This function replaces document.write and document.writeln */
- function document_write() {
-
-   if (id == undefined)
-     return;
-
-   for (var i=0; i<arguments.length; i++)
-     output.push(arguments[i]);
-
- }
-
- /** Inserts buffered document.write()-output in front of the pending banner-code */
- function insert_output() {
-
-   if (output.length > 0) {
-     output.push(OA_output[id]);
-     OA_output[id] = "";
-     for (i=0; i<output.length; i++)
-       OA_output[id] += output[i];
-     output = [];
-   }
-
- }
-
-} ( window.coolibri = window.coolibri || {}, jQuery ));
-
-/** Because otherwise, IE may complain loudly about the undefined variable, if anything goes wrong... */
-var OA_output = {};
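-
-A sketch of how this might be wired into a page (the markup is a hypothetical illustration; the css-classes must match the zones-map above, and OA_source must be defined by the page - jQuery(window).load() is the pre-jQuery-3 API of that era):
-
-<div class="oa oa-superbanner"></div>
-<div class="oa oa-rectangle"></div>
-<script type="text/javascript">
-  jQuery(window).load(function() { coolibri.show_ads(); });
-</script>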
-
-
-
-
-]]>As partner of the company yourSHOUTER UG (haftungsbeschränkt) we publish results of a research project, that is funded by the European Union and the federal state Northrhine-Westphalia.
-
-
-
-
-
-
Due to lack of time, this page is still under construction.
-
So, please be a little more patient with us...
-A simple Plugin for generating a Database-Schema from Hibernate Mapping-Annotations
-hibernate4-maven-plugin is a plugin for generating a database-schema from your Hibernate-Mappings and creating or updating your database accordingly. Its main usage is to automatically create and populate a test-database for unit-tests in cooperation with the dbunit-maven-plugin.
- -
-Hibernate comes with the built-in functionality to automatically create or update the database schema. This functionality is configured in the session-configuration via the parameter hbm2ddl.auto (see Hibernate Reference Documentation - Chapter 3.4. Optional configuration properties). But doing so is not very wise, because you can easily corrupt or erase your production database, if this configuration parameter slips through to your production environment.
-
-Alternatively, you can run the tools SchemaExport or SchemaUpdate by hand. But that is not very comfortable and, being used to maven, you will quickly long for a plugin, that does that job automatically for you, when you fire up your test cases. -
-In the good old times, there was the Maven Hibernate3 Plugin, that did this for you. But unfortunately, this plugin is not compatible with Hibernate 4.x. Since there does not seem to be any successor for the Maven Hibernate3 Plugin and googling does not help, I decided to write up this simple plugin (inspired by these two articles I found: Schema Export with Hibernate 4 and Maven and Schema generation with Hibernate 4, JPA and Maven). -
-I hope, the resulting simple-to-use, bulletproof hibernate4-maven-plugin is useful! -
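-
-For reference: once configured, the schema-export can also be triggered manually from the command line via the plugin's export-goal (the goal-name hibernate4:export also appears in the commit-logs further down this page):
-
-mvn hibernate4:export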
-]]><scanDependencies>none</scanDependencies> in the configuration of the hibernate4-maven-plugin should do the trick.]]>
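-
-A minimal sketch of how that configuration might look in the pom (plugin-coordinates as shown elsewhere on this page; only the scanDependencies-value is quoted from the answer above):
-
-<plugin>
-  <groupId>de.juplo</groupId>
-  <artifactId>hibernate4-maven-plugin</artifactId>
-  <configuration>
-    <scanDependencies>none</scanDependencies>
-  </configuration>
-</plugin>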
-import java.beans.PropertyDescriptor;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.springframework.beans.BeansException;
-import org.springframework.beans.PropertyValues;
-import org.springframework.beans.factory.BeanCreationException;
-import org.springframework.beans.factory.BeanFactory;
-import org.springframework.beans.factory.NoSuchBeanDefinitionException;
-import org.springframework.beans.factory.support.RootBeanDefinition;
-import org.springframework.context.annotation.CommonAnnotationBeanPostProcessor;
-
-
-/**
- * Swallows all {@link NoSuchBeanDefinitionException}s, and
- * {@link BeanCreationException}s, that might be thrown
- * during autowiring.
- *
- * @author kai@juplo.de
- */
-public class ForgivableCommonAnnotationBeanPostProcessor
- extends
- CommonAnnotationBeanPostProcessor
-{
- private static final Logger log =
- LoggerFactory.getLogger(ForgivableCommonAnnotationBeanPostProcessor.class);
-
- @Override
- protected Object autowireResource(BeanFactory factory, LookupElement element, String requestingBeanName) throws BeansException
- {
- try
- {
- return super.autowireResource(factory, element, requestingBeanName);
- }
- catch (NoSuchBeanDefinitionException e)
- {
- log.warn(e.getMessage());
- return null;
- }
- }
-
- @Override
- public Object postProcessBeforeInitialization(Object bean, String beanName)
- {
- try
- {
- return super.postProcessBeforeInitialization(bean, beanName);
- }
- catch (BeanCreationException e)
- {
- log.warn(e.getMessage());
- return bean;
- }
- }
-}
-
-]]>This article was published in the course of a research project, that is funded by the European Union and the federal state Northrhine-Westphalia.
-
-
-
-
-
-
-In this mini-HOWTO, we will configure a simulated network in exactly the same way as Docker does it.
-Our goal is to understand how Docker handles virtual networks. Later (in another post), we will use the gained understanding to simulate segmented multihop networks using Docker-Compose.
-First, we have to create a bridge, that will act as the switch in our virtual network, and bring it up.
sudo ip link add dev switch type bridge
-sudo ip link set dev switch up
-
-It is crucial to activate each created device, since new devices are not activated by default.
-Now we can create a virtual host. This is done by creating a new network namespace, that will act as the host:
-sudo ip netns add host_1
--This "virtual host" is not of much use at the moment, because it is not connected to any network, which we will do next... -
-Connecting the host to the network is done with the help of a veth pair:
sudo ip link add dev host_1 type veth peer name host_if
-
-A veth-pair acts as a virtual patch-cable. Like a real cable, it always has two ends, and data that enters one end is copied to the other. Unlike a real cable, each end comes with a network interface card (nic). To stick with the metaphor: using a veth-pair is like taking a patch-cable with a nic hardwired to each end and installing these nics.
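-
-As a foretaste of the complete listing below: one end of the pair created above is moved into the network namespace of the virtual host, while the other end is plugged into the virtual switch (both commands reappear verbatim in the full listing):
-
-sudo ip link set dev host_if netns host_1
-sudo ip link set dev host_1 master switch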
-Some common pitfalls, that you may run into with setups like this, are collected at the end of this page.
# Create a bridge in the standard-networknamespace, that represents the switch
-sudo ip link add dev switch type bridge
-# Bring the bridge up
-sudo ip link set dev switch up
-
-
-# Create a veth-pair for the virtual peer host_1
-sudo ip link add dev host_1 type veth peer name host_if
-# Create a private namespace for host_1 and move the interface host_if into it
-sudo ip netns add host_1
-sudo ip link set dev host_if netns host_1
-# Rename the private interface to eth0
-sudo ip netns exec host_1 ip link set dev host_if name eth0
-# Set the IP for the interface eth0 and bring it up
-sudo ip netns exec host_1 ip addr add 192.168.10.1/24 dev eth0
-sudo ip netns exec host_1 ip link set dev eth0 up
-# Plug the other end into the virtual switch and bring it up
-sudo ip link set dev host_1 master switch
-sudo ip link set dev host_1 up
-
-# Create a veth-pair for the virtual peer host_2
-sudo ip link add dev host_2 type veth peer name host_if
-# Create a private namespace for host_2 and move the interface host_if into it
-sudo ip netns add host_2
-sudo ip link set dev host_if netns host_2
-# Rename the private interface to eth0
-sudo ip netns exec host_2 ip link set dev host_if name eth0
-# Set the IP for the interface eth0 and bring it up
-sudo ip netns exec host_2 ip addr add 192.168.10.2/24 dev eth0
-sudo ip netns exec host_2 ip link set dev eth0 up
-# Plug the other end into the virtual switch and bring it up
-sudo ip link set dev host_2 master switch
-sudo ip link set dev host_2 up
-
-# Create a veth-pair for the virtual peer host_3
-sudo ip link add dev host_3 type veth peer name host_if
-# Create a private namespace for host_3 and move the interface host_if into it
-sudo ip netns add host_3
-sudo ip link set dev host_if netns host_3
-# Rename the private interface to eth0
-sudo ip netns exec host_3 ip link set dev host_if name eth0
-# Set the IP for the interface eth0 and bring it up
-sudo ip netns exec host_3 ip addr add 192.168.10.3/24 dev eth0
-sudo ip netns exec host_3 ip link set dev eth0 up
-# Plug the other end into the virtual switch and bring it up
-sudo ip link set dev host_3 master switch
-sudo ip link set dev host_3 up
-
-# Create a veth-pair for the virtual peer host_4
-sudo ip link add dev host_4 type veth peer name host_if
-# Create a private namespace for host_4 and move the interface host_if into it
-sudo ip netns add host_4
-sudo ip link set dev host_if netns host_4
-# Rename the private interface to eth0
-sudo ip netns exec host_4 ip link set dev host_if name eth0
-# Set the IP for the interface eth0 and bring it up
-sudo ip netns exec host_4 ip addr add 192.168.10.4/24 dev eth0
-sudo ip netns exec host_4 ip link set dev eth0 up
-# Plug the other end into the virtual switch and bring it up
-sudo ip link set dev host_4 master switch
-sudo ip link set dev host_4 up
-
-# Create a veth-pair for the virtual peer host_5
-sudo ip link add dev host_5 type veth peer name host_if
-# Create a private namespace for host_5 and move the interface host_if into it
-sudo ip netns add host_5
-sudo ip link set dev host_if netns host_5
-# Rename the private interface to eth0
-sudo ip netns exec host_5 ip link set dev host_if name eth0
-# Set the IP for the interface eth0 and bring it up
-sudo ip netns exec host_5 ip addr add 192.168.10.5/24 dev eth0
-sudo ip netns exec host_5 ip link set dev eth0 up
-# Plug the other end into the virtual switch and bring it up
-sudo ip link set dev host_5 master switch
-sudo ip link set dev host_5 up
-
-
-]]>-In this usage scenario, two network namespaces (i.e., two virtual hosts) are connected with a virtual patch cable (the veth-pair). -One of the two network namespaces may be the default network namespace, but not both (see Pitfall: Pointless Usage Of Veth-Pairs). -
-Recipe:
-sudo ip netns add host_1
-sudo ip netns add host_2
-sudo ip link add dev if_1 type veth peer name if_2
-sudo ip link set dev if_1 netns host_1
-sudo ip link set dev if_2 netns host_2
-
-sudo ip netns exec host_1 ip addr add 192.168.111.1/24 dev if_1
-sudo ip netns exec host_1 ip link set dev if_1 up
-sudo ip netns exec host_2 ip addr add 192.168.111.2/24 dev if_2
-sudo ip netns exec host_2 ip link set dev if_2 up
-
-Check the configuration of host_1 (the output for host_2 is analogous):
-sudo ip netns exec host_1 ip -d addr show
-1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
-    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0
-904: if_1@if903: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
-    link/ether 7e:02:d1:d3:36:7e brd ff:ff:ff:ff:ff:ff link-netnsid 1 promiscuity 0
-    veth
-    inet 192.168.111.1/24 scope global if_1
-       valid_lft forever preferred_lft forever
-    inet6 fe80::7c02:d1ff:fed3:367e/64 scope link
-       valid_lft forever preferred_lft forever
-
-sudo ip netns exec host_1 ip route show
-192.168.111.0/24 dev if_1 proto kernel scope link src 192.168.111.1
-
-Note, that all interfaces are numbered and that each end of a veth-pair explicitly states the number of the other end of the pair:
-sudo ip netns exec host_2 ip addr show
-1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
-    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
-903: if_2@if904: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
- link/ether 52:f4:5a:be:dc:9b brd ff:ff:ff:ff:ff:ff link-netnsid 0
- inet 192.168.111.2/24 scope global if_2
- valid_lft forever preferred_lft forever
- inet6 fe80::50f4:5aff:febe:dc9b/64 scope link
- valid_lft forever preferred_lft forever
-
-Here: if_2 with number 903 in the network namespace host_2 states, that its other end has the number 904 — compare this with the output for the network namespace host_1 above!
-Now ping host_2 from host_1 (the opposite direction works analogously):
-sudo ip netns exec host_1 ping -c2 192.168.111.2
-PING 192.168.111.2 (192.168.111.2) 56(84) bytes of data.
-64 bytes from 192.168.111.2: icmp_seq=1 ttl=64 time=0.066 ms
-64 bytes from 192.168.111.2: icmp_seq=2 ttl=64 time=0.059 ms
-
---- 192.168.111.2 ping statistics ---
-2 packets transmitted, 2 received, 0% packet loss, time 999ms
-rtt min/avg/max/mdev = 0.059/0.062/0.066/0.008 ms
-
-sudo ip netns exec host_1 ping -c2 192.168.111.2
-# And at the same time in another terminal:
-sudo ip netns exec host_1 tcpdump -n -i if_1
-tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
-listening on if_1, link-type EN10MB (Ethernet), capture size 262144 bytes
-^C16:34:44.894396 IP 192.168.111.1 > 192.168.111.2: ICMP echo request, id 14277, seq 1, length 64
-16:34:44.894431 IP 192.168.111.2 > 192.168.111.1: ICMP echo reply, id 14277, seq 1, length 64
-16:34:45.893385 IP 192.168.111.1 > 192.168.111.2: ICMP echo request, id 14277, seq 2, length 64
-16:34:45.893418 IP 192.168.111.2 > 192.168.111.1: ICMP echo reply, id 14277, seq 2, length 64
-
-4 packets captured
-4 packets received by filter
-0 packets dropped by kernel
-
--In this usage scenario, a network namespace (i.e., a virtual host) is connected to a bridge (i.e. a virtual network/switch) with a virtual patch cable (the veth-pair). -The network namespace may be the default network namespace (i.e., the local host). -
-Recipe:
-sudo ip link add dev switch type bridge
-sudo ip netns add host_1
-sudo ip link add dev veth0 type veth peer name link_1
-sudo ip link set dev veth0 netns host_1
-
-You can think of the last step (the last three commands) as plugging the virtual host (the network namespace) into the virtual switch (the bridge) with the help of a patch-cable (the veth-pair).
-sudo ip link set dev switch up
-sudo ip link set dev link_1 master switch
-sudo ip link set dev link_1 up
-sudo ip netns exec host_1 ip addr add 192.168.111.1/24 dev veth0
-sudo ip netns exec host_1 ip link set dev veth0 up
-
--The bridge only needs its own IP, if the network has to be routable (see: Virtual Bridges) -
-sudo ip netns exec host_1 ip -d addr show
-1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
-    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0
-947: veth0@if946: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
- link/ether 3e:70:06:77:fa:67 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
- veth
- inet 192.168.111.1/24 scope global veth0
- valid_lft forever preferred_lft forever
- inet6 fe80::3c70:6ff:fe77:fa67/64 scope link
- valid_lft forever preferred_lft forever
-
-sudo ip netns exec host_1 ip route show
-192.168.111.0/24 dev veth0 proto kernel scope link src 192.168.111.1
-
-Now, validate the setup with the ping-command.
-There are three ways to achieve this.
-Choose only one!
-sudo ip addr add 192.168.111.254/24 dev switch
-ping -c2 192.168.111.1
-sudo ip netns exec host_1 ping -c2 192.168.111.254
-
-In this commonly used approach, the kernel sets up all needed routing entries automatically.
-sudo ip netns add host_2
-sudo ip link add dev veth0 type veth peer name link_2
-sudo ip link set dev veth0 netns host_2
-sudo ip link set dev link_2 master switch
-sudo ip link set dev link_2 up
-sudo ip netns exec host_2 ip addr add 192.168.111.2/24 dev veth0
-sudo ip netns exec host_2 ip link set dev veth0 up
-sudo ip netns exec host_2 ping -c2 192.168.111.1
-sudo ip netns exec host_1 ping -c2 192.168.111.2
-
-In this approach, the virtual network is kept separated from the host.
-Only the virtual hosts, that are plugged into the virtual network can reach each other.
-sudo ip link add dev veth0 type veth peer name link_2
-sudo ip link set dev link_2 master switch
-sudo ip link set dev link_2 up
-sudo ip addr add 192.168.111.2/24 dev veth0
-sudo ip link set dev veth0 up
-ping -c2 192.168.111.1
-sudo ip netns exec host_1 ping -c2 192.168.111.2
-
-Strictly speaking, this is a special case of the former approach, where the default network namespace is used instead of a private one.
-(This time, both ends of the veth-pair - veth0 and link_2 - stay in the default network namespace.)
--
-Recipe:
-
-
-
-If you forget to specify the prefix-length for one of the addresses, you will not be able to ping the host on the other end of the veth-pair.
-
-192.168.111.1/24 specifies the address 192.168.111.1 as part of the subnet with the network-mask 255.255.255.0. If you forget the prefix, the address will be interpreted as 192.168.111.1/32 and the kernel will not add a network-route. Hence, you will not be able to ping the other end (192.168.111.2), because the kernel would not know, that it is reachable via the interface that belongs to the address 192.168.111.1.
-
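-A short demonstration of the difference, reusing the two-namespaces recipe from above:
-
-# Without prefix-length: interpreted as 192.168.111.1/32 - no network-route is added
-sudo ip netns exec host_1 ip addr add 192.168.111.1 dev if_1
-sudo ip netns exec host_1 ip route show    # prints no route for 192.168.111.0/24
-# With prefix-length: the kernel adds the network-route automatically
-sudo ip netns exec host_1 ip addr del 192.168.111.1/32 dev if_1
-sudo ip netns exec host_1 ip addr add 192.168.111.1/24 dev if_1
-sudo ip netns exec host_1 ip route show    # 192.168.111.0/24 dev if_1 proto kernel scope link src 192.168.111.1
-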
-If you run tcpdump on an interface in the default-namespace, the captured packets show up immediately.
-I.e.: You can watch the exchange of ICMP-packets live, as it happens.
-But: If you run tcpdump in a named network-namespace, the captured packets will not show up, until you stop the command with CTRL-C!
-
-Do not ask me why — I just witnessed that odd behaviour on my Linux box and found it noteworthy, because several times I thought, that my setup was not working, before I realised, that I had to kill tcpdump to see the captured packets.
-
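-If you run into this, forcing line-buffered output may already help; -l is a standard tcpdump-option, but I have not verified, that it cures this particular behaviour:
-
-sudo ip netns exec host_1 tcpdump -l -n -i if_1
-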
-This is another reason, why packets might not show up on the virtual interfaces of the configured veth-pair. Often, veth-pairs are used as a simple example for virtual networking, like in the following snippet:
-sudo ip link add dev if_1 type veth peer name if_2
-sudo ip addr add 192.168.111.1 dev if_1
-sudo ip link set dev if_1 up
-sudo ip addr add 192.168.111.2 dev if_2
-sudo ip link set dev if_2 up
-
-Note, that additionally the prefix was not specified with the given addresses (compare with above)! This works here, because both interfaces are local, so that the kernel knows how to reach them without any routing information.
-The setup is then "validated" with a ping from one address to the other: -
-ping -c 3 -I 192.168.111.1 192.168.111.2
-PING 192.168.111.2 (192.168.111.2) from 192.168.111.1 : 56(84) bytes of data.
-64 bytes from 192.168.111.2: icmp_seq=1 ttl=64 time=0.068 ms
-64 bytes from 192.168.111.2: icmp_seq=2 ttl=64 time=0.079 ms
-64 bytes from 192.168.111.2: icmp_seq=3 ttl=64 time=0.105 ms
-
---- 192.168.111.2 ping statistics ---
-3 packets transmitted, 3 received, 0% packet loss, time 2052ms
-rtt min/avg/max/mdev = 0.068/0.084/0.105/0.015 ms
-
-
-Though it looks like the setup is working as intended, this is not the case:
-The packets are not routed through the virtual network interfaces if_1 and if_2.
-
sudo tcpdump -i if_1 -n
-tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
-listening on if_1, link-type EN10MB (Ethernet), capture size 262144 bytes
-^C
-0 packets captured
-0 packets received by filter
-0 packets dropped by kernel
-
--Instead, they show up on the local interface: -
-sudo tcpdump -i lo -n
-tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
-listening on lo, link-type EN10MB (Ethernet), capture size 262144 bytes
-12:20:09.899325 IP 192.168.111.1 > 192.168.111.2: ICMP echo request, id 28048, seq 1, length 64
-12:20:09.899353 IP 192.168.111.2 > 192.168.111.1: ICMP echo reply, id 28048, seq 1, length 64
-12:20:10.909627 IP 192.168.111.1 > 192.168.111.2: ICMP echo request, id 28048, seq 2, length 64
-12:20:10.909684 IP 192.168.111.2 > 192.168.111.1: ICMP echo reply, id 28048, seq 2, length 64
-12:20:11.933584 IP 192.168.111.1 > 192.168.111.2: ICMP echo request, id 28048, seq 3, length 64
-12:20:11.933630 IP 192.168.111.2 > 192.168.111.1: ICMP echo reply, id 28048, seq 3, length 64
-^C
-6 packets captured
-12 packets received by filter
-0 packets dropped by kernel
-
--This happens, because the kernel adds entries for both interfaces in the local routing table, since both interfaces are connected to the default network namespace of the host: -
-ip route show table local
-broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
-local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
-local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
-broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
-local 192.168.111.1 dev if_1 proto kernel scope host src 192.168.111.1
-local 192.168.111.2 dev if_2 proto kernel scope host src 192.168.111.2
-
-
-When routing the packets, the kernel looks up these entries and consequently routes the packets through the lo-interface, since both addresses are local addresses.
-
-There is nothing strange or even wrong with this behavior. If there is something wrong in this setup, it is the idea to create two connected virtual local interfaces. That is equally pointless, as installing two nics into one computer and connecting both cards with a cross-over patch cable...
-
-Use CommonsRequestLoggingFilter and place it before the filter, that represents Spring Security.
-
Jump to the configuration details
--If you want to understand the OAuth2-Flow or have to debug any issues involving it, the crucial part about it is the request/response-flow between your application and the provider. -Unfortunately, this -
-spring.security.filter.order=-100
-
-https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#security-properties
-
-https://mtyurt.net/post/spring-how-to-insert-a-filter-before-springsecurityfilterchain.html
-
-https://spring.io/guides/topicals/spring-security-architecture#_web_security
-logging.level.org.springframework.web.filter.CommonsRequestLoggingFilter=DEBUG
-
-@Bean
-public FilterRegistrationBean requestLoggingFilter()
-{
- CommonsRequestLoggingFilter loggingFilter = new CommonsRequestLoggingFilter();
-
- loggingFilter.setIncludeClientInfo(true);
- loggingFilter.setIncludeQueryString(true);
- loggingFilter.setIncludeHeaders(true);
- loggingFilter.setIncludePayload(true);
- loggingFilter.setMaxPayloadLength(64000);
-
- FilterRegistrationBean reg = new FilterRegistrationBean(loggingFilter);
- reg.setOrder(-101); // Default for spring.security.filter.order is -100
- return reg;
-}
-]]>TODO
-make it mockable
-2020/03/06 14:31:20 [emerg] 1#1: host not found in upstream "app:8080" in /etc/nginx/conf.d/proxy.conf:2
-nginx: [emerg] host not found in upstream "app:8080" in /etc/nginx/conf.d/proxy.conf:2
-
-
-
-
-
-The outbox is represented by an additional table in the database, that takes part in the transaction. All messages, that should be sent if and only if the transaction completes successfully, are stored in this table. The sending of these messages is thus postponed until after the completion of the transaction.
-If the table is read outside of the transaction context, only entries related to successfully committed transactions are visible. These entries can then be read and queued for sending. If the entries are only removed from the outbox-table after a successful transmission has been confirmed by the messaging middleware, no messages can be lost.
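-
-What such an outbox-table could look like is sketched below. This is a minimal, MySQL-flavoured assumption for illustration (all column-names are hypothetical); any schema works, as long as it provides a stable ordering and the serialized message:
-
-CREATE TABLE outbox (
-  id            BIGINT AUTO_INCREMENT PRIMARY KEY, -- stable order for the sending
-  message_key   VARCHAR(255),                      -- optional key for the messaging middleware
-  message_value TEXT NOT NULL                      -- the serialized message
-);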
The biggest drawback of the Outbox-Pattern is the postponement of all messages, that are sent as part of a transaction, until after the completion of the transaction. This changes the order in which the messages are sent.
-
-Messages B1 and B2 of a transaction B, that started after a transaction A, will be sent before the messages A1 and A2, that belong to transaction A, if transaction B completes before transaction A - even if the recording of messages A1 and A2 happened before the recording of messages B1 and B2. This happens, because all messages, that are written in transaction A, only become visible to the processing of the messages after the completion of the transaction, since that processing happens outside of the scope of the transaction. Therefore, the commit-order dictates the order, in which the messages are sent.
-]]>Based on a very simple example-project, we will implement the Outbox-Pattern with Kafka.
-In this part, we will add a first simple version of the logic, that is needed to poll the outbox-table and send the found entries as messages into an Apache Kafka topic.
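-
-A first, deliberately naive sketch of such a polling loop. All names (the table- and topic-name, the OutboxPoller-class itself) are assumptions for illustration, not code from the actual example-project:
-
-import java.sql.*;
-import org.apache.kafka.clients.producer.Producer;
-import org.apache.kafka.clients.producer.ProducerRecord;
-
-public class OutboxPoller
-{
-  private final Connection connection;              // auto-commit connection, outside of the business-transaction
-  private final Producer<String, String> producer;  // a plain KafkaProducer<String, String>
-
-  public OutboxPoller(Connection connection, Producer<String, String> producer)
-  {
-    this.connection = connection;
-    this.producer = producer;
-  }
-
-  /** Polls the outbox-table and sends all entries, that are visible (i.e. committed) */
-  public void poll() throws SQLException
-  {
-    try (
-        Statement statement = connection.createStatement();
-        ResultSet resultSet = statement.executeQuery(
-            "SELECT id, message_key, message_value FROM outbox ORDER BY id"))
-    {
-      while (resultSet.next())
-      {
-        long id = resultSet.getLong("id");
-        ProducerRecord<String, String> record = new ProducerRecord<>(
-            "outbox", resultSet.getString("message_key"), resultSet.getString("message_value"));
-        try
-        {
-          // Send synchronously for simplicity: the entry is only removed,
-          // after the transmission has been confirmed - no message can be lost
-          producer.send(record).get();
-        }
-        catch (Exception e)
-        {
-          return; // the entry stays in the outbox and is retried on the next poll
-        }
-        try (PreparedStatement delete =
-            connection.prepareStatement("DELETE FROM outbox WHERE id = ?"))
-        {
-          delete.setLong(1, id);
-          delete.executeUpdate();
-        }
-      }
-    }
-  }
-}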
]]>- hibernate4-maven-plugin is now available in the Central Maven Repository -
-That means, that you can now use it without manually downloading and adding it to your local repository.
-
- Simply define it in your plugins-section...
-
-
-<plugin>
-  <groupId>de.juplo</groupId>
-  <artifactId>hibernate4-maven-plugin</artifactId>
-  <version>1.0</version>
-</plugin>
-
-
-- ...and there you go! -
-Apart from two bugfixes, this version includes some minor improvements, which might come in handy for you.
--hibernate4-maven-plugin 1.0.1 should be available in the Central Maven Repository in a few hours. -
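-
-One of those improvements is the new force-parameter (see the commit-log below). I assume it can be passed on the command line like any other property, e.g.:
-
-mvn hibernate4:export -Dhibernate.export.force=true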
-
-commit 4b507b15b0122ac180e44b8418db8d9143ae9c3a
-Author: Kai Moritz
-Date: Tue Jan 15 23:09:01 2013 +0100
-
- Reworked documentation: splited and reorderd pages and menu
-
-commit 65bbbdbaa7df1edcc92a3869122ff06a3895fe57
-Author: Kai Moritz
-Date: Tue Jan 15 22:39:39 2013 +0100
-
- Added breadcrumb to site
-
-commit a8c4f4178a570da392c94e384511f9e671b0d040
-Author: Kai Moritz
-Date: Tue Jan 15 22:33:48 2013 +0100
-
- Added Google-Analytics tracking-code to site
-
-commit 1feb1053532279981a464cef954072cfefbe01a5
-Author: Kai Moritz
-Date: Tue Jan 15 22:21:54 2013 +0100
-
- Added release information to site
-
-commit bf5e8c39287713b9eb236ca441473f723059357a
-Author: Kai Moritz
-Date: Tue Dec 18 00:14:08 2012 +0100
-
- Reworked documentation: added documentation for new features etc.
-
-commit 36af74be42d47438284677134037ce399ea0b58e
-Author: Kai Moritz
-Date: Tue Jan 15 10:40:09 2013 +0100
-
- Test-Classes can now be included into the scanning for Hibernate-Annotations
-
-commit bcf07578452d7c31dc97410bc495c73bd0f87748
-Author: Kai Moritz
-Date: Tue Jan 15 09:09:05 2013 +0100
-
- Bugfix: database-parameters for connection were not taken from properties
-
- The hibernate-propertiesfile was read and used for the configuration of
- the SchemaExport-class, but the database-parameters from these source were
- ignored, when the database-connection was opened.
-
-commit 54b22b88de40795a73397ac8b3725716bc80b6c4
-Author: Kai Moritz
-Date: Wed Jan 9 20:57:22 2013 +0100
-
- Bugfix: connection was closed, even when it was never created
-
- Bugreport from: Adriano Machado
-
- When only the script is generated and no export is executed, no database-
- connection is opend. Nevertheless, the code tried to close it in the
- finally-block, which lead to a NPE.
-
-commit b9ab24b21d3eb65e2a2208be658ff447c1846894
-Author: Kai Moritz
-Date: Tue Dec 18 00:31:22 2012 +0100
-
- Implemented new parameter "force"
-
- If -Dhibernate.export.force is specified, the schema-export will be forced.
-
-commit 19740023bb37770ad8e08c8e50687cb507e2fbfd
-Author: Kai Moritz
-Date: Fri Dec 14 02:16:44 2012 +0100
-
- Plugin ignores upper- or lower-case mismatches for "type" and "target"
-
-commit 8a2e08b6409034fd692c4bea72058f785e6802ad
-Author: Kai Moritz
-Date: Fri Dec 14 02:13:05 2012 +0100
-
- The Targets EXPORT and NONE force excecution
-
- Otherwise, an explicitly requestes SQL-export or mapping-test-run would be
- skipped, if no annotated class was modified.
-
- If the export is skipped, this is signaled via the maven-property
- hibernate.export.skipped.
-
- Refactored name of the skip-property to an public final static String
-
-commit 55a33e35422b904b974a19d3d6368ded60ea1811
-Author: Kai Moritz
-Date: Fri Dec 14 01:43:45 2012 +0100
-
- Configuration via properties reworked
-
- * export-type and -target are now also configurable via properties
- * schema-filename, -delemiter and -format are now also configurable via
- porperties
-
-commit 5002604d2f9024dd7119190915b6c62c75fbe1d6
-Author: Kai Moritz
-Date: Thu Dec 13 16:19:55 2012 +0100
-
- schema is now rebuild, when SQL-dialect changes
-
-commit a2859d3177a64880ca429d4dfd9437a7fb78dede
-Author: Kai Moritz
-Date: Tue Dec 11 17:30:19 2012 +0100
-
- Skipping of unchanged scenarios is now based on MD5-sums of all classes
-
- When working with Netbeans, the schema was often rebuild without need.
- The cause of this behaviour was, that Netbeans (or Maven itself) sometimes
- touches unchanged classes. To avoid this, hibernat4-maven-plugin now
- calculates MD5-sums for all annotated classes and compares these instead of
- the last-modified value.
-
-commit a4de03f352b21ce6abad570d2753467e3a972a10
-Author: Kai Moritz
-Date: Tue Dec 11 17:02:14 2012 +0100
-
- hibernate4:export is skipped, when annotated classes are unchanged
-
- Hbm2DdlMojo now checks the last-modified-timestamp of all found annotated
- classes and aborts the schema-generation, when no class has changed and no
- new class was added since the last execution.
-
- It then sets a maven-property, to indicate to other plugins, that the
- generation was skipped.
-
-commit 2f3807b9fbde5c1230e3a22010932ddec722871b
-Author: Kai Moritz
-Date: Thu Nov 29 18:23:59 2012 +0100
-
- Found annotated classes get logged now
-
-
-]]>This release includes:
-support for the hibernateNamingStrategy-configuration-option (thanks to Lorenzo Nicora)
-support for *.hbm.xml-files (the old mapping-approach without annotations)
-
-hibernate4-maven-plugin 1.0.2 is available in the Central Maven Repository.
-
-commit 4edef457d2b747d939a141de24bec5e32abbc0c7
-Author: Kai Moritz
-Date: Fri Aug 2 00:37:40 2013 +0200
-
- Last preparations for release
-
-commit 82eada1297cdc295dcec9f43660763a04c1b1deb
-Author: Kai Moritz
-Date: Fri Aug 2 00:37:22 2013 +0200
-
- Upgrade to Hibernate 4.2.3.Final
-
-commit 3d355800b5a5d2a536270b714f37a84d50b12168
-Author: Kai Moritz
-Date: Thu Aug 1 12:41:06 2013 +0200
-
- Mapping-configurations are opend as given before searched in resources
-
-commit 1ba817af3ae5ab23232fca001061f8050cecd6a7
-Author: Kai Moritz
-Date: Thu Aug 1 01:45:22 2013 +0200
-
- Improved documentaion (new FAQ-entries)
-
-commit 02312592d27d628cc7e0d8e28cc40bf74a80de21
-Author: Kai Moritz
-Date: Wed Jul 31 23:07:26 2013 +0200
-
- Added support for mapping-configuration through mapping-files (*.hbm.xml)
-
-commit b6ac188a40136102edc51b6824875dfb07c89955
-Author: nicus
-Date: Fri Apr 19 15:27:21 2013 +0200
-
- Fixed problem with NamingStrategy (contribution from Lorenzo Nicora)
-
- * NamingStrategy is set explicitly on Hibernate Configuration (not
- passed by properties)
- * Added 'hibernateNamingStrategy' configuration property
-
-commit c2135b5dedc55fc9e3f4dd9fe53f8c7b4141204c
-Author: Kai Moritz
-Date: Mon Feb 25 22:35:33 2013 +0100
-
- Integration of the maven-plugin-plugin for automated helpmojo-generation
-
- Thanks to Adriano Machado, who contributed this patch!
-
-
-]]>
-So, here we go:
-Just add the @Parent-annotation to the attribute of your associated @Embeddable-class, that points back to its parent.
-
-@Entity
-class Cat
-{
- @Id
- Long id;
-
- @ElementCollection
- Set<Kitten> kittens;
-
- ...
-}
-
-@Embeddable
-class Kitten
-{
- // Embeddable's have no ID-property!
-
- @Parent
- private Cat mother;
-
- ...
-}
-
-But this clean approach has a drawback: it only works with Hibernate. If you work with other JPA-implementations or plain old JPA itself, it will not work. Hence, it will not work in Google's App Engine, for example!
-
-Unfortunately, there are no clean workarounds to get bidirectional associations to @ElementCollection's working with JPA. The only workarounds I found only work for directly embedded instances - not for collections of embedded instances:
-
Annotate @Embedded on a getter/setter pair rather than on the member itself (found on stackoverflow.com) - a sketch of this workaround follows below.
-
-If you want bidirectional associations to the elements of your embedded collection, it works only with hibernate!
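-
-A sketch of that getter/setter-workaround for a directly embedded instance (not a collection!). It assumes consistent property-access and a setMother()-method on the Kitten-class from above - both hypothetical illustrations, not part of the original example:
-
-import javax.persistence.*;
-
-@Entity
-public class Cat
-{
-  private Long id;
-  private Kitten favourite;
-
-  @Id
-  public Long getId() { return id; }
-  public void setId(Long id) { this.id = id; }
-
-  @Embedded
-  public Kitten getFavourite() { return favourite; }
-
-  public void setFavourite(Kitten favourite)
-  {
-    // Inject the back-reference by hand, since plain JPA
-    // has no equivalent of Hibernate's @Parent
-    if (favourite != null)
-      favourite.setMother(this);
-    this.favourite = favourite;
-  }
-}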
]]>If you ever ran mvn appengine:update, like me yesterday, you are surely wondering, how to logout from maven-appengine-plugin.
-
-maven-appengine-plugin somehow miraculously stores your credentials for you, when you attempt to upload an app for the first time. This comes in very handy, if you work with just one google-account. But it might become a "pain-in-the-ass", if you work with several accounts. Because, once you have logged in into an account, there is no way (I mean: no goal of the maven-appengine-plugin) to log out, in order to change the account!
-
-Only after some hard googling, I found a solution to this problem in a blog-post: maven-appengine-plugin stores its oauth2-credentials in the file .appcfg_oauth2_tokens_java in your home directory (on Linux - sorry, Windows-folks, you have to figure out yourself, where the plugin stores the credentials on Windows).
-
-Just delete the file .appcfg_oauth2_tokens_java and you're logged out! The next time you call mvn appengine:update, you will be asked again to accept the request and, hence, can switch accounts. If you are not using oauth2, just look for .appcfg*-files in your home directory. I am sure, you will find another file with stored credentials, that you can delete to logout, like Radomir, who deleted .appcfg_cookiesy to log out.
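-
-So, on Linux, logging out boils down to a single command (assuming the file lives directly in your home directory, as described above):
-
-rm ~/.appcfg_oauth2_tokens_java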
-
-This release of the plugin now supports scanning of dependencies. By default, all dependencies in the scope compile are scanned for annotated classes. Thanks to Guido Wimmel, who pointed out, that this was really missing, and who supported the implementation with a little test-project for this use-case. Learn more...
-
-Another new feature of this release is support for Hibernate Envers - Easy Entity Auditing. Thanks a lot to Victor Tatai, who implemented this, and Erik-Berndt Scheper, who helped integrating it and supported the testing with a little test-project, that demonstrates the new feature. You can visit it at bitbucket as a starting point for your own experiments with this technique.
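-
-Judging from the commit-log of this release, the Envers-feature is controlled via the property hibernate.export.envers; I assume it can be toggled on the command line like this:
-
-mvn hibernate4:export -Dhibernate.export.envers=false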
-Many thanks also to Stephen Johnson and Eduard Szente, who pointed out bugs and helped eliminating them...
-
-hibernate4-maven-plugin 1.0.3 is available in the Central Maven Repository. -
-
-commit adb20bc4da63d4cec663ca68648db0f808e3d181
-Author: Kai Moritz
-Date: Fri Oct 18 01:52:27 2013 +0200
-
- Added missing documentation for skip-configuration
-
-commit 99a7eaddd1301df0d151f01791e3d177297670aa
-Author: Kai Moritz
-Date: Fri Oct 18 00:38:29 2013 +0200
-
- Added @since-Annotation to configuration-parameters
-
-commit 221d977368ee1897377f80bfcdd50dcbcd1d4b83
-Author: Kai Moritz
-Date: Wed Oct 16 01:18:53 2013 +0200
-
- The plugin now scans for annotated classes in dependencies too
-
-commit ef1233a6095a475d9cdded754381267c5d1e336f
-Author: Kai Moritz
-Date: Wed Oct 9 21:37:58 2013 +0200
-
- Project-Documentation now uses the own skin juplo-skin
-
-commit 84e8517be79d88d7e2bec2688a8f965f591394bf
-Author: Kai Moritz
-Date: Wed Oct 9 21:30:28 2013 +0200
-
- Reworked APT-Documentation: page-titles were missing
-
-commit f27134cdec6c38b4c8300efb0bb34fc8ed381033
-Author: Kai Moritz
-Date: Wed Oct 9 21:29:30 2013 +0200
-
- maven-site-plugin auf Version 3.3 aktualisiert
-
-commit d38b2386641c7ca00f54d69cb3f576c20b0cdccc
-Author: Kai Moritz
-Date: Wed Sep 18 23:59:13 2013 +0200
-
- Reverted to old behaviour: export is skipped, when maven.test.skip=true
-
-commit 7d935b61a3d80260b9cacf959984e14708c3a96b
-Author: Kai Moritz
-Date: Wed Sep 18 18:15:38 2013 +0200
-
- No configuration for hibernate.dialect might be a valid configuration too
-
-commit caa492b70dc1daeaef436748db38df1c19554943
-Author: Kai Moritz
-Date: Wed Sep 18 18:14:54 2013 +0200
-
- Improved log-messages
-
-commit 2b1147d5e99c764c1f6816f4d4f000abe260097c
-Author: Kai Moritz
-Date: Wed Sep 18 18:10:32 2013 +0200
-
- Variable "envers" should not be put into hibernate.properties
-
- "hibernate.exoprt.envers" is no Hibernate-Configuration-Parameter.
- Hence, it should not be put into the hibernate.properties-file.
-
-commit 0a52dca3dd6729b8b6a43cc3ef3b69eb22755b0a
-Author: Erik-Berndt Scheper
-Date: Tue Sep 10 16:18:47 2013 +0200
-
- Rename envers property to hibernate.export.envers
-
-commit 0fb85d6754939b2f30ca4fc18823c5f7da1add31
-Author: Erik-Berndt Scheper
-Date: Tue Sep 10 08:20:23 2013 +0200
-
- Ignore IntelliJ project files
-
-commit e88830c968c1aabc5c32df8a061a8b446c26505c
-Author: Victor Tatai
-Date: Mon Feb 25 16:23:29 2013 -0300
-
- Adding envers support (contribution from Victor Tatai)
-
-commit e59ac1191dda44d69dfb8f3afd0770a0253a785c
-Author: Kai Moritz
-Date: Tue Sep 10 20:46:55 2013 +0200
-
- Added Link to old Version 1.0.2 in documentation
-
-commit 97a45d03e1144d30b90f2f566517be22aca39358
-Author: Kai Moritz
-Date: Tue Sep 10 20:29:15 2013 +0200
-
- Execution is only skipped, if explicitly told so
-
-commit 8022611f93ad6f86534ddf3568766f88acf863f3
-Author: Kai Moritz
-Date: Sun Sep 8 00:25:51 2013 +0200
-
- Upgrade to Scannotation 1.0.3
-
-commit 9ab53380a87c4a1624654f654158a701cfeb0cae
-Author: Kai Moritz
-Date: Sun Sep 8 00:25:02 2013 +0200
-
- Upgrade to Hibernate 4.2.5.Final
-
-commit 5715c7e29252ed230389cfce9c1a0376fec82813
-Author: Kai Moritz
-Date: Sat Aug 31 09:01:43 2013 +0200
-
- Fixed failure when target/classes does not exist when runnin mvn test phase
-
- Thanks to Stephen Johnson
-
- Details from the original email:
- ---------
- The following patch stops builds failing when target/classes (or no main java exists), and target/test-classes and src/tests exist.
-
- So for example calling
-
- mvn test -> invokes compiler:compile and if you have export bound to process-classes phase in executions it will fail. Maybe better to give info and carry on. Say for example they want to leave the executions in place that deal with process-classes and also process-test-classes but they do not want it to fail if there is no java to annotate in src/classes. The other way would be to comment out the executions bound to process-classes. What about export being bound to process-class by default? Could this also cause issues?
-
- In either case I think the plugin code did checks for src/classes directory existing, in which case even call "mvn test" would fail as src/classes would not exist as no java existed in src/main only in src/test. Have a look through the patch and see if its of any use.
-
-commit 9414e11c9ffb27e195193f5fa53c203c6297c7a4
-Author: Kai Moritz
-Date: Sat Aug 31 11:28:51 2013 +0200
-
- Improved log-messages
-
-commit da0b3041b8fbcba6175d05a2561b38c365111ed8
-Author: Kai Moritz
-Date: Sat Aug 31 08:51:03 2013 +0200
-
- Fixed NPE when using nested classes in entities with @EmbeddedId/@Embeddable
-
- Patch supplied by Eduard Szente
-
- Details:
- ----------------
- Hi,
-
- when using your plugin for schema export the presence of nested classes
- in entities (e.g. when using @EmbeddedId/@Embeddable and defining the Id
- within the target entity class)
- yields to NPEs.
-
- public class Entity {
-
- @EmbeddedId
- private Id id;
-
- @Embeddable
- public static class Id implements Serializable {
- ....
- }
-
- }
-
- Entity.Id.class.getSimplename == "Id", while the compiled class is named
- "Entity$Id.class"
-
- Patch appended.
-
- Best regards,
- Eduard
-
- ]]>You cannot do both: use the client-side mode of LESS to ease development and use the lesscss-maven-plugin to automatically compile the LESS-sources into CSS for production. That does not work, because your stylesheets must be linked in different ways, if you are switching between the client-side mode - which is best for development - and the pre-compiled mode - which is best for production. For the client-side mode you need something like:
-
-
-<link rel="stylesheet/less" type="text/css" href="styles.less" />
-<script src="less.js" type="text/javascript"></script>
-
-
-While, for the pre-compiled mode, you want to link to your stylesheets as usual, with:
-
-
-<link rel="stylesheet" type="text/css" href="styles.css" />
-
-
-
-While looking for a solution to this dilemma, I stumbled across wro4j. Originally intended to speed up page-delivery by combining and minimizing multiple resources into one through the use of a servlet-filter, this tool also comes with a maven-plugin, that lets you do the same offline, while compiling your webapp.
-The idea is to use the wro4j-maven-plugin to compile and combine your LESS-sources into CSS for production, and to use the wro4j-filter to dynamically deliver the compiled CSS while developing. This way, you do not have to alter your HTML-code, when switching between development and production, because you always link to the CSS-files.
- -So, lets get dirty!
- -First, we configure wro4j, like as we want to use it to speed up our page. The details are explained and linked on wro4j's Getting-Started-Page. In short, we just need two files: wro.xml and wro.properties.
-wro.xml tells wro4j, which resources should be combined and how the result should be named. I am using the following configuration to generate all LESS-Sources beneath base/ into one CSS-file called base.css:
-
-<groups xmlns="http://www.isdc.ro/wro">
-  <group name="base">
-    <css>/less/base/*.less</css>
-  </group>
-</groups>
-
-wro4j looks for /less/base/*.less inside the root of the web-context, which is equal to src/main/webapp in a normal maven-project. There are other ways to specify the resources, which enable you to store them elsewhere. But this approach works best for our goal, because the path is understandable for both: the wro4j servlet-filter, which we are configuring now for our development-environment, and the wro4j-maven-plugin, that we will configure later for build-time compilation.
wro.properties in short tells wro4j, how or if it should convert the combined sources and how it should behave. I am using the following configuration to tell wro4j, that it should convert *.less-sources into CSS and do that on every request:
-
-managerFactoryClassName=ro.isdc.wro.manager.factory.ConfigurableWroManagerFactory
-preProcessors=cssUrlRewriting,lessCssImport
-postProcessors=less4j
-disableCache=true
-
-
-First of all we specify the ConfigurableWroManagerFactory, because otherwise, wro4j would not pick up our pre- and post-processor-configuration. This is a little bit confusing, because wro4j is already reading the wro.properties-file - otherwise wro4j would never detect the managerFactoryClassName-directive - and you might think: "Why? He is already interpreting our configuration!" But believe me, he is not! You can read more about that in wro4j's documentation. The disableCache=true is also crucial, because otherwise, we would not see the changes take effect when developing with jetty-maven-plugin later on. The pre-processors lessCssImport and cssUrlRewriting merge together all our LESS-resources under /less/base/*.less and do some URL-rewriting, in case you have specified paths to images, fonts or other resources inside your LESS-code, to reflect that the resulting CSS is found under /css/base.css and not /css/base/YOURFILE.css like the LESS-resources.
You can do much more with your resources here, for example minimizing. Also, there are countless configuration options to fine-tune the behaviour of wro4j. But for our goal, we are now only interested in the compilation of our LESS-sources.
- - -Configuring the filter in the web.xml is easy. It is explained in wro4j's installation-insctuctions. But the trick is, that we do not want to configure that filter for the production-version of our webapp, because we want to compile the resources offline, when the webapp is build. To acchieve this, we can use the <overrideDescriptor>-Parameter of the jetty-maven-plugin.
This parameter lets you specify additional configuration options for the web.xml of your webapp. I am using the following configuration for my jetty-maven-plugin:
-
-
-<plugin>
- <groupId>org.eclipse.jetty</groupId>
- <artifactId>jetty-maven-plugin</artifactId>
- <configuration>
- <webApp>
- <overrideDescriptor>${project.basedir}/src/test/resources/jetty-web.xml</overrideDescriptor>
- </webApp>
- </configuration>
- <dependencies>
- <dependency>
- <groupId>ro.isdc.wro4j</groupId>
- <artifactId>wro4j-core</artifactId>
- <version>${wro4j.version}</version>
- </dependency>
- <dependency>
- <groupId>ro.isdc.wro4j</groupId>
- <artifactId>wro4j-extensions</artifactId>
- <version>${wro4j.version}</version>
- <exclusions>
- <exclusion>
- <groupId>javax.servlet</groupId>
- <artifactId>servlet-api</artifactId>
- </exclusion>
- <exclusion>
- <groupId>org.apache.commons</groupId>
- <artifactId>commons-lang3</artifactId>
- </exclusion>
- <exclusion>
- <groupId>commons-io</groupId>
- <artifactId>commons-io</artifactId>
- </exclusion>
- <exclusion>
- <groupId>org.springframework</groupId>
- <artifactId>spring-web</artifactId>
- </exclusion>
- <exclusion>
- <groupId>com.google.code.gson</groupId>
- <artifactId>gson</artifactId>
- </exclusion>
- <exclusion>
- <groupId>com.google.javascript</groupId>
- <artifactId>closure-compiler</artifactId>
- </exclusion>
- <exclusion>
- <groupId>com.github.lltyk</groupId>
- <artifactId>dojo-shrinksafe</artifactId>
- </exclusion>
- <exclusion>
- <groupId>org.jruby</groupId>
- <artifactId>jruby-core</artifactId>
- </exclusion>
- <exclusion>
- <groupId>org.jruby</groupId>
- <artifactId>jruby-stdlib</artifactId>
- </exclusion>
- <exclusion>
- <groupId>me.n4u.sass</groupId>
- <artifactId>sass-gems</artifactId>
- </exclusion>
- <exclusion>
- <groupId>nz.co.edmi</groupId>
- <artifactId>bourbon-gem-jar</artifactId>
- </exclusion>
- <exclusion>
- <groupId>org.codehaus.gmaven.runtime</groupId>
- <artifactId>gmaven-runtime-1.7</artifactId>
- </exclusion>
- <exclusion>
- <groupId>org.webjars</groupId>
- <artifactId>jshint</artifactId>
- </exclusion>
- <exclusion>
- <groupId>org.webjars</groupId>
- <artifactId>emberjs</artifactId>
- </exclusion>
- <exclusion>
- <groupId>org.webjars</groupId>
- <artifactId>handlebars</artifactId>
- </exclusion>
- <exclusion>
- <groupId>org.webjars</groupId>
- <artifactId>coffee-script</artifactId>
- </exclusion>
- <exclusion>
- <groupId>org.webjars</groupId>
- <artifactId>jslint</artifactId>
- </exclusion>
- <exclusion>
- <groupId>org.webjars</groupId>
- <artifactId>json2</artifactId>
- </exclusion>
- <exclusion>
- <groupId>org.webjars</groupId>
- <artifactId>jquery</artifactId>
- </exclusion>
- </exclusions>
- </dependency>
- </dependencies>
-</plugin>
-
-The dependencies to wro4j-core and wro4j-extensions are needed by jetty, to be able to enable the filter defined below. Unfortunately, one of the transitive dependencies of wro4j-extensions triggers an ugly error when running the jetty-maven-plugin. Therefore, all unneeded dependencies of wro4j-extensions are excluded, as a workaround for this error/bug.
And my jetty-web.xml looks like this:
-
-
-<?xml version="1.0" encoding="UTF-8"?>
-<web-app xmlns="http://java.sun.com/xml/ns/javaee"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
- version="2.5">
- <filter>
- <filter-name>wro</filter-name>
- <filter-class>ro.isdc.wro.http.WroFilter</filter-class>
- </filter>
- <filter-mapping>
- <filter-name>wro</filter-name>
- <url-pattern>*.css</url-pattern>
- </filter-mapping>
-</web-app>
-
-
-The filter processes any URIs, that end with .css. This way, the wro4j servlet-filter makes base.css available under any path, because, for example, /base.css, /css/base.css and /foo/bar/base.css all end with .css.
This is all, that is needed to develop with dynamically reloadable compiled LESS-resources. Just fire up your browser and browse to /what/you/like/base.css. (But do not forget to put some LESS-files in src/main/webapp/less/base/ first!)
All that is left to configure now is the build-process. If you would build and deploy your webapp now, the CSS-file base.css would not be generated and the link to your stylesheet, that already works in our jetty-maven-plugin environment, would point to a 404. Hence, we need to set up the wro4j-maven-plugin. I am using this configuration:
-
-<plugin>
- <groupId>ro.isdc.wro4j</groupId>
- <artifactId>wro4j-maven-plugin</artifactId>
- <version>${wro4j.version}</version>
- <configuration>
- <wroManagerFactory>ro.isdc.wro.maven.plugin.manager.factory.ConfigurableWroManagerFactory</wroManagerFactory>
- <cssDestinationFolder>${project.build.directory}/${project.build.finalName}/css/</cssDestinationFolder>
- </configuration>
- <executions>
- <execution>
- <phase>prepare-package</phase>
- <goals>
- <goal>run</goal>
- </goals>
- </execution>
- </executions>
-</plugin>
-
-
-I connected the run-goal with the prepare-package-phase, because the statically compiled CSS-file is needed only in the final war. The ConfigurableWroManagerFactory tells wro4j, that it should look up further configuration options in our wro.properties-file, where we tell wro4j, that it should compile our LESS-resources. The <cssDestinationFolder>-tag tells wro4j, where it should put the generated CSS-file. You can adjust that to suit your needs.
That's it: now the same CSS-file, which is created on the fly by the wro4j servlet-filter when using mvn jetty:run and, thus, enables dynamic reloading of our LESS-resources, is generated during the build-process by the wro4j-maven-plugin.
If you already compile your LESS-resources with the lesscss-maven-plugin, you can stick with it and skip step 3. But I strongly recommend giving wro4j-maven-plugin a try, because it is a much more powerfull tool, that can speed up your final webapp even more.
- -With a configuration like the above one, your LESS-resources and wro4j-configuration-files will be packed into your production-war. That might be confusing later, because neither wro4j nor LESS is used in the final war. You can add the following to your pom.xml to exclude these files from your war for the sake of clarity:
-
-<plugin>
- <artifactId>maven-war-plugin</artifactId>
- <configuration>
- <warSourceExcludes>
- WEB-INF/wro.*,
- less/**
- </warSourceExcludes>
- </configuration>
-</plugin>
-
-
-
-We only scratched the surface of what can be done with wro4j. Based on this configuration, you can easily enable additional features to fine-tune your final build for maximum speed. You really should take a look at the list of available Processors!
]]>Recently, I bought myself the Hama 00054807 Internet TV Stick. This stick is a low-budget option to pimp your TV, if it has an HDMI-port, but no built-in smart-tv functionality (or a crappy one). You just plug in the stick, connect its dc-port to a USB-port of the TV (or the included adapter), and there you go.
-But one big drawback of the Hama 00054807 is, that there are nearly no useful apps preinstalled and Google forbids Hama to install the original Google Play Store on the device. Hence, you are locked out of any easy access to all the apps, that constitute the usability of android.
Because of that, I decided to root my Hama00054807 as a first step on the way to fully utilize this neat little toy of mine.
I began with opening the device and found the device-ID B.AML8726.6B 12122. But there seems to be no one else, who ever tried it. But as it turned out, it is fairly easy, because stock recovery is not locked and so you can just install everything you want.
Screenshot of the stock recovery installed on the Hama 00054807 Intetnet TV Stick[/caption]
-
-I found out that you can boot into recovery by pressing the reset-button while the stick is booting. You can reach the reset-button through a little hole in the back of the device, without the need to open the case. Just keep the button pressed until recovery shows up (see screenshot).
-Unfortunately, the keyboard does not work while you are in recovery-mode. So at first glance, you can do nothing except look at the nice picture of the android-bot being repaired.
-But I found out that you can control stock recovery with the help of a file called factory_update_param.aml, which is read from the external sd-card and interpreted by stock recovery on startup. Just create a text-file with the following content (I think it should use unix-style newlines, aka LF):
-
---update_package=/sdcard/update.zip
-
-
-Place this file on the sd-card and name it factory_update_param.aml. Now you can place any suitable, correctly signed android-update on the sd-card, rename it to update.zip, and stock recovery will install it upon boot, if you boot into recovery with the sd-card inserted.
If you want to wipe all data as well and factory reset your device, you can extend factory_update_param.aml like this:
-
---update_package=/sdcard/update.zip
---wipe_data
---wipe_cache
---wipe_media
-
-But be careful to remove these extra lines later, because they are executed every time you boot into recovery with the sd-card inserted! You have been warned :)
-So, actually rooting the device is fairly easy now. You just have to download any correctly signed Superuser-update. For example this one from the superuser homepage: Superuser-3.1.3-arm-signed.zip. Then put it on the sd-card, rename it to update.zip, boot into recovery with the sd-card inserted, and that's it, you're root!
If you reboot your device, you should now find the superuser-app among your apps. To verify that everything went right, you could install any app that requires root-privileges. If the app requests root-privileges, you should see a dialog from the superuser-app that asks you whether the privileges should be granted or not. For example, you can install a terminal-app, type su and hit return to request root-privileges.
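A session in such a terminal-app might look like this (a sketch; the exact output depends on your su-binary and terminal-app):

$ su
# id
uid=0(root) gid=0(root)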
So now your device is rooted and you are prepared to install custom updates on it. But the Google Play Store is still missing. I hope I will find some time to accomplish that, too. Stay tuned!
]]>If you do not want to know why it does not work and how I fixed it, just jump to the quick fix!
-With Jetty 9.0.x the configuration of the jetty-maven-plugin (formerly known as maven-jetty-plugin) has changed dramatically. Since then, it is no longer possible to configure a HTTPS-Connector in the plugin easily. In the past, connecting to your development-container via HTTPS was rarely necessary. But since Snowden, encryption is on everybody's mind. And so, testing the encrypted part of your webapp becomes more and more important.
jetty-maven-plugin 9.0.x
A bug-report states that
-
-Since the constructor signature changed for Connectors in jetty-9 to require the Server instance to be passed into it, it is no longer possible to configure Connectors directly with the plugin (because maven requires no-arg constructor for any <configuration> elements).
-
The documentation includes an example of how to configure a HTTPS-Connector with the help of a jetty.xml-file. But unfortunately, this example is broken. Jetty refuses to start with the following error: [ERROR] Failed to execute goal org.eclipse.jetty:jetty-maven-plugin:9.0.5.v20130815:run (default-cli) on project FOOBAR: Failure: Unknown configuration type: New in org.eclipse.jetty.xml.XmlConfiguration@4809f93a -> [Help 1].
So, here is what you have to do to fix this broken example: the content shown for the file jetty.xml in the example is wrong. It has to look like the other example-files. That is, it has to start with a <Configure>-tag. The corrected content of the file looks like this:
-<?xml version="1.0"?>
-<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure_9_0.dtd">
-
-<!-- ============================================================= -->
-<!-- Configure the Http Configuration -->
-<!-- ============================================================= -->
-<Configure id="httpConfig" class="org.eclipse.jetty.server.HttpConfiguration">
- <Set name="secureScheme">https</Set>
- <Set name="securePort"><Property name="jetty.secure.port" default="8443" /></Set>
- <Set name="outputBufferSize">32768</Set>
- <Set name="requestHeaderSize">8192</Set>
- <Set name="responseHeaderSize">8192</Set>
- <Set name="sendServerVersion">true</Set>
- <Set name="sendDateHeader">false</Set>
- <Set name="headerCacheSize">512</Set>
-
- <!-- Uncomment to enable handling of X-Forwarded- style headers
- <Call name="addCustomizer">
- <Arg><New class="org.eclipse.jetty.server.ForwardedRequestCustomizer"/></Arg>
- </Call>
- -->
-</Configure>
-
-If you are getting the error [ERROR] Failed to execute goal org.eclipse.jetty:jetty-maven-plugin:9.0.5.v20130815:run (default-cli) on project FOOBAR: Failure: etc/jetty.keystore (file or directory not found) -> [Help 1] now, this is because you have to create/get a certificate for your HTTPS-Connector. For development, a self-signed certificate is sufficient. You can easily create one, like back in the good old maven-jetty-plugin times, with this command: keytool -genkey -alias jetty -keyalg RSA -keystore src/test/resources/jetty.keystore -storepass secret -keypass secret -dname "CN=localhost". Just be sure to change the example file jetty-ssl.xml to reflect the path to your new keystore-file and its password. Your jetty-ssl.xml should look like:
-<?xml version="1.0"?>
-<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure_9_0.dtd">
-
-<!-- ============================================================= -->
-<!-- Configure a TLS (SSL) Context Factory -->
-<!-- This configuration must be used in conjunction with jetty.xml -->
-<!-- and either jetty-https.xml or jetty-spdy.xml (but not both) -->
-<!-- ============================================================= -->
-<Configure id="sslContextFactory" class="org.eclipse.jetty.util.ssl.SslContextFactory">
- <Set name="KeyStorePath"><Property name="jetty.base" default="." />/<Property name="jetty.keystore" default="src/test/resources/jetty.keystore"/></Set>
- <Set name="KeyStorePassword"><Property name="jetty.keystore.password" default="secret"/></Set>
- <Set name="KeyManagerPassword"><Property name="jetty.keymanager.password" default="secret"/></Set>
- <Set name="TrustStorePath"><Property name="jetty.base" default="." />/<Property name="jetty.truststore" default="src/test/resources/jetty.keystore"/></Set>
- <Set name="TrustStorePassword"><Property name="jetty.truststore.password" default="secret"/></Set>
- <Set name="EndpointIdentificationAlgorithm"></Set>
- <Set name="ExcludeCipherSuites">
- <Array type="String">
- <Item>SSL_RSA_WITH_DES_CBC_SHA</Item>
- <Item>SSL_DHE_RSA_WITH_DES_CBC_SHA</Item>
- <Item>SSL_DHE_DSS_WITH_DES_CBC_SHA</Item>
- <Item>SSL_RSA_EXPORT_WITH_RC4_40_MD5</Item>
- <Item>SSL_RSA_EXPORT_WITH_DES40_CBC_SHA</Item>
- <Item>SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA</Item>
- <Item>SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA</Item>
- </Array>
- </Set>
-
- <!-- =========================================================== -->
- <!-- Create a TLS specific HttpConfiguration based on the -->
- <!-- common HttpConfiguration defined in jetty.xml -->
- <!-- Add a SecureRequestCustomizer to extract certificate and -->
- <!-- session information -->
- <!-- =========================================================== -->
- <New id="sslHttpConfig" class="org.eclipse.jetty.server.HttpConfiguration">
- <Arg><Ref refid="httpConfig"/></Arg>
- <Call name="addCustomizer">
- <Arg><New class="org.eclipse.jetty.server.SecureRequestCustomizer"/></Arg>
- </Call>
- </New>
-
-</Configure>
-
-Unless you are running mvn jetty:run as root, you should see another error now: [ERROR] Failed to execute goal org.eclipse.jetty:jetty-maven-plugin:9.0.5.v20130815:run (default-cli) on project FOOBAR: Failure: Permission denied -> [Help 1]. This is because the ports are set to 80 and 443, which lie in the privileged port-range.
You have to change jetty-http.xml like this:
-<?xml version="1.0"?>
-<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure_9_0.dtd">
-
-<!-- ============================================================= -->
-<!-- Configure the Jetty Server instance with an ID "Server" -->
-<!-- by adding a HTTP connector. -->
-<!-- This configuration must be used in conjunction with jetty.xml -->
-<!-- ============================================================= -->
-<Configure id="Server" class="org.eclipse.jetty.server.Server">
-
- <!-- =========================================================== -->
- <!-- Add a HTTP Connector. -->
- <!-- Configure an o.e.j.server.ServerConnector with a single -->
- <!-- HttpConnectionFactory instance using the common httpConfig -->
- <!-- instance defined in jetty.xml -->
- <!-- -->
- <!-- Consult the javadoc of o.e.j.server.ServerConnector and -->
- <!-- o.e.j.server.HttpConnectionFactory for all configuration -->
- <!-- that may be set here. -->
- <!-- =========================================================== -->
- <Call name="addConnector">
- <Arg>
- <New class="org.eclipse.jetty.server.ServerConnector">
- <Arg name="server"><Ref refid="Server" /></Arg>
- <Arg name="factories">
- <Array type="org.eclipse.jetty.server.ConnectionFactory">
- <Item>
- <New class="org.eclipse.jetty.server.HttpConnectionFactory">
- <Arg name="config"><Ref refid="httpConfig" /></Arg>
- </New>
- </Item>
- </Array>
- </Arg>
- <Set name="host"><Property name="jetty.host" /></Set>
- <Set name="port"><Property name="jetty.port" default="8080" /></Set>
- <Set name="idleTimeout"><Property name="http.timeout" default="30000"/></Set>
- </New>
- </Arg>
- </Call>
-
-</Configure>
-
-... and jetty-https.xml like this:
-<?xml version="1.0"?>
-<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure_9_0.dtd">
-
-<!-- ============================================================= -->
-<!-- Configure a HTTPS connector. -->
-<!-- This configuration must be used in conjunction with jetty.xml -->
-<!-- and jetty-ssl.xml. -->
-<!-- ============================================================= -->
-<Configure id="Server" class="org.eclipse.jetty.server.Server">
-
- <!-- =========================================================== -->
- <!-- Add a HTTPS Connector. -->
- <!-- Configure an o.e.j.server.ServerConnector with connection -->
- <!-- factories for TLS (aka SSL) and HTTP to provide HTTPS. -->
- <!-- All accepted TLS connections are wired to a HTTP connection.-->
- <!-- -->
- <!-- Consult the javadoc of o.e.j.server.ServerConnector, -->
- <!-- o.e.j.server.SslConnectionFactory and -->
- <!-- o.e.j.server.HttpConnectionFactory for all configuration -->
- <!-- that may be set here. -->
- <!-- =========================================================== -->
- <Call id="httpsConnector" name="addConnector">
- <Arg>
- <New class="org.eclipse.jetty.server.ServerConnector">
- <Arg name="server"><Ref refid="Server" /></Arg>
- <Arg name="factories">
- <Array type="org.eclipse.jetty.server.ConnectionFactory">
- <Item>
- <New class="org.eclipse.jetty.server.SslConnectionFactory">
- <Arg name="next">http/1.1</Arg>
- <Arg name="sslContextFactory"><Ref refid="sslContextFactory"/></Arg>
- </New>
- </Item>
- <Item>
- <New class="org.eclipse.jetty.server.HttpConnectionFactory">
- <Arg name="config"><Ref refid="sslHttpConfig"/></Arg>
- </New>
- </Item>
- </Array>
- </Arg>
- <Set name="host"><Property name="jetty.host" /></Set>
- <Set name="port"><Property name="https.port" default="8443" /></Set>
- <Set name="idleTimeout"><Property name="https.timeout" default="30000"/></Set>
- </New>
- </Arg>
- </Call>
-</Configure>
-
-Now, it should be running, but...
-So, now it is working. But you still have to clutter your project with several files and avoid some pitfalls (believe it or not: if you put the filenames in the <jettyXml>-tag of your pom.xml on separate lines, jetty won't start!). Last but not least, the HTTP-Connector will stop working if you forget to add the jetty-http.xml that is mentioned at the end of the example.
Because of that, I've created a simple 6-step quick-fix-guide to get the HTTPS-Connector of the jetty-maven-plugin running.
Follow these steps (the file contents are the ones shown above):

    1. Create src/test/resources/jetty.xml
    2. Create src/test/resources/jetty-http.xml
    3. Create src/test/resources/jetty-ssl.xml
    4. Create src/test/resources/jetty-https.xml
    5. Create src/test/resources/jetty.keystore (e.g. with the keytool-command shown above)
    6. Configure the jetty-maven-plugin in your pom.xml to include the XML-configuration-files. But be aware: the ordering of the files is important and there should be no newlines in between. You have been warned! It should look like:
-
-<plugin>
- <groupId>org.eclipse.jetty</groupId>
- <artifactId>jetty-maven-plugin</artifactId>
- <configuration>
- <jettyXml>
- ${project.basedir}/src/test/resources/jetty.xml,${project.basedir}/src/test/resources/jetty-http.xml,${project.basedir}/src/test/resources/jetty-ssl.xml,${project.basedir}/src/test/resources/jetty-https.xml
- </jettyXml>
- </configuration>
-</plugin>
-
-That's it. You should be done!
]]>
-I don't like visual HTML-editors, because they always mess up your HTML. So the first thing I did in my wordpress-profile was to check the check-box Disable the visual editor when writing.
-But today I found out that this is worth nothing.
-Even in text-mode, wordpress adds some <p>- and <br>-tags automagically and, hence, automagically messes up my neatly hand-crafted HTML-code.
-
Fuck wordpress! (Ehem - sorry for that outburst)...
-
-But what is even worse: after really turning off wordpress's automagical messup-functionality, nearly all my handwritten <p>-tags were gone, too.
-So, if you want to turn off the automatic <p>- and <br>-tags, you should really do it as early as you can. Otherwise, you will have to clean up all your old posts afterwards, like me. I've lost some hours of useless HTML-editing today because of that sh#%&*!
-
-The wordpress-documentation of the built-in HTML-editor links to this post, which describes how to disable the automatic use of paragraph tags.
-Simply open the file wp-includes/default-filters.php of your wordpress-installation and comment out the following line:
-
-
-
-add_filter('the_content', 'wpautop');
-
-
-
-If you are building your own wordpress-theme - like me - you can alternatively add the following to the functions.php-file of your theme:
-
-
-remove_filter('the_content', 'wpautop');
-
-
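-By the way: wordpress registers the same filter for excerpts in default-filters.php. If it messes up your excerpts too, the analogous call should help (an untested sketch):
-
-remove_filter('the_excerpt', 'wpautop');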
-
-For example, I was wondering for a while where all that whitespace in my posts was coming from.
-Being used to handcrafting my HTML, I often wrote one sentence per line, or put some empty lines in between to clearly arrange my code.
-Then along comes wordpress, messing everything up: automagically putting every sentence in its own paragraph, because it was written on its own line, and putting <br> in between, to reflect my empty lines.
-
-But even worse, wordpress also puts these unwanted <p>-tags around HTML-code, which breaks because of it.
-For example, I eventually found out about this auto-messup-functionality because I was checking my blog-post with an HTML-validator and was wondering why the validator was complaining about a <quote>-tag inside flow content, which I had never put there. It turned out that wordpress had put it there for me...
-
-Fehler
-Der Nutzer ist nicht dazu berechtigt, diese Anwendung zu sehen.:
-Der Benutzer ist nicht berrechtigt diese Applikation an zusehen. Der Entwickler hat dies so eingestellt.
-
-(In English, roughly: "Error: The user is not allowed to see this application. The developer has configured it this way.")
-
Since Google turns up nothing on this, here is the simple explanation of what is going wrong:
-You have logged into Facebook as a test user of one of your apps and forgot about that when accessing another app!
-Apparently, the test users of an app may only access that app and no other pages/apps on Facebook - which makes sense. The only confusing part is that Facebook claims you configured something by hand yourself...
]]>-This release mainly is a library-upgrade to version 4.3.1.Final of hibernate. It also includes some bug-fixes provided by the community. Please see the release notes for details.
-It took us quite some time to release this version, and we are sorry for that. But with a growing number of users, we are becoming more anxious about breaking some special use-cases. Because of that, we started to add some integration-tests to avoid that hassle, and that took us some time...
-If you have some special small-sized (example) use-cases for the plugin, we would appreciate it if you would provide them to us, so that we can add them as additional integration-tests.
-
-commit f3dabc0e6e3676244986b5bbffdb67d427c8383c
-Author: Kai Moritz
-Date: Mon Jun 2 10:31:12 2014 +0200
-
- [maven-release-plugin] prepare release hibernate4-maven-plugin-1.0.4
-
-commit 856dd31c9b90708e841163c91261a865f9efd224
-Author: Kai Moritz
-Date: Mon Jun 2 10:12:24 2014 +0200
-
- Updated documentation
-
-commit 64900890db2575b7a28790c5e4d5f45083ee94b3
-Author: Kai Moritz
-Date: Tue Apr 29 20:43:15 2014 +0200
-
- Switched documentation to xhtml, to be able to integrate google-pretty-print
-
-commit bd78c276663790bf7a3f121db85a0d62c64ce38c
-Author: Kai Moritz
-Date: Tue Apr 29 19:42:41 2014 +0200
-
- Fixed bug in site-configuration
-
-commit 1628bcf6c9290a729352215ee22e5b48fa628c4c
-Author: Kai Moritz
-Date: Tue Apr 29 18:07:44 2014 +0200
-
- Verifying generated SQL in integration-test hibernate4-maven-plugin-envers-sample
-
-commit 25079f13c0eda6807d5aee67086a21ddde313213
-Author: Kai Moritz
-Date: Tue Apr 29 18:01:10 2014 +0200
-
- Added integration-test provided by Erik-Berndt Scheper
-
-commit 69458703cddc2aea1f67e06db43bce6950c6f3cb
-Author: Kai Moritz
-Date: Tue Apr 29 17:52:17 2014 +0200
-
- Verifying generated SQL in integration-test schemaexport-example
-
-commit a53a2ad438038084200a8449c557a41159e409dc
-Author: Kai Moritz
-Date: Tue Apr 29 17:46:05 2014 +0200
-
- Added integration-test provided by Guido Wimmel
-
-commit f18f820198878cddcea8b98c2a5e0c9843b923d2
-Author: Kai Moritz
-Date: Tue Apr 29 09:43:06 2014 +0200
-
- Verifying generated SQL in integration-test hib-test
-
-commit 4bb462610138332087d808a62c84a0c9776b24cc
-Author: Kai Moritz
-Date: Tue Apr 29 08:58:33 2014 +0200
-
- Added integration-test provided by Joel Johnson
-
-commit c5c4c7a4007bc2bd58b850150adb78f8518788da
-Author: Kai Moritz
-Date: Tue Apr 29 08:43:28 2014 +0200
-
- Prepared POM for integration-tests via invoker-maven-plugin
-
-commit d8647fedfe936f49476a5c1f095d51a9f5703d3d
-Author: Kai Moritz
-Date: Tue Apr 29 08:41:50 2014 +0200
-
- Upgraded Version of maven from 3.0.4 to 3.2.1
-
-commit 1979c6349fc2a9e0fe3f028fa1cc76557b32031c
-Author: Frank Schimmel
-Date: Wed Feb 12 15:16:18 2014 +0100
-
- Properly support constraints expressed by bean validation (jsr303) annotations.
-
- * Access public method of package-visible TypeSafeActivator class without reflection.
- * Fix arguments to call of TypeSafeActivator.applyRelationalConstraints().
- * Use hibernate version 4.3.1.Final for all components.
- * Minor refactorings in exception handling.
-
-commit c3a16dc3704517d53501914bb8a0f95f856585f4
-Author: Kai Moritz
-Date: Fri Jan 17 09:05:05 2014 +0100
-
- Added last contributors to the POM
-
-commit 5fba40e135677130cbe0ff3c59f6055228293d92
-Author: Mark Robinson
-Date: Fri Jan 17 08:53:47 2014 +0100
-
- Generated schema now corresponds to hibernate validators set on the beans
-
-commit aedcc19cfb89a8b387399a978afab1166be816e3
-Author: Kai Moritz
-Date: Thu Jan 16 18:33:32 2014 +0100
-
- Upgrade to Hibernate 4.3.0.Final
-
-commit 734356ab74d2896ec8d7530af0d2fa60ff58001f
-Author: Kai Moritz
-Date: Thu Jan 16 18:23:12 2014 +0100
-
- Improved documentation of the dependency-scanning on the pitfalls-page
-
-commit f2955fc974239cbb266922c04e8e11101d7e9dd9
-Author: Joel Johnson
-Date: Thu Dec 26 14:33:51 2013 -0700
-
- Text cleanup, spelling, etc.
-
-commit 727d1a35bb213589270b097d04d5a1f480bffef6
-Author: Joel Johnson
-Date: Thu Dec 26 14:02:29 2013 -0700
-
- Make output file handling more robust
-
- * Ensure output file directory path exists
- * Anchor relative paths in build directory
-
-commit eeb182205a51c4507e61e1862af184341e65dbd3
-Author: Joel Johnson
-Date: Thu Dec 26 13:53:37 2013 -0700
-
- Check that md5 path is file and has content
-
-commit 64c0a52bdd82142a4c8caef18ab0671a74fdc6c1
-Author: Joel Johnson
-Date: Thu Dec 26 11:25:34 2013 -0700
-
- Use more descriptive filename for schema md5
-
-commit ba2e48a347a839be63cbce4b7ca2469a600748c6
-Author: Joel Johnson
-Date: Thu Dec 26 11:20:24 2013 -0700
-
- Offer explicit disable option
-
- Use an explicit disable property, but still default it to test state
-
-commit e44434257040745e66e0596b262dd0227b085729
-Author: Kai Moritz
-Date: Fri Oct 18 01:55:11 2013 +0200
-
- [maven-release-plugin] prepare for next development iteration
-
- ]]>log4j.properties (or log4j.xml), when I fired up my web-application in development-mode under Jetty with mvn jetty:run.
-But when I installed the application on the production-server, which uses a Tomcat 7 servlet-container, no special logger-configuration was picked up from my configuration-file.
-But - strangely - my configuration-file was not ignored completely.
-The appender-configuration and the log-level of the root-logger were picked up from my configuration-file.
-Only the special logger-configurations were ignored.
-
-Here is my configuration, as it was when I ran into the problem:
-As said before: all worked as expected while developing under Jetty; in production under Tomcat, only the special logger-configurations were ignored.
-Because of that, it took me quite a while and a lot of reading to figure out that this was not a configuration-issue, but a clash of libraries. The cause of this strange behaviour was the fact that one must not use the log4j-binding slf4j-log4j12 and the log4j-bridge log4j-over-slf4j together.
-This fact is quite logical, because this combination should push all your logging-statements into an endless loop, where they are handed back and forth between slf4j and log4j, as stated in the slf4j-documentation here. But if you see all your log-messages in development, and in production only the configuration behaves strangely, this mistake is really hard to figure out! So, I hope I can save you some time by drawing your attention to this.
-Only the cause is hard to find. The solution is very simple: just switch from log4j to logback.
-There are some more good reasons why you should do this anyway, about which you can learn more here.
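-A sketch of the relevant maven-dependencies for such a switch might look like this (the version numbers are just examples from the time of writing; remove slf4j-log4j12 and log4j from your dependencies at the same time):
-
-<dependency>
-  <groupId>ch.qos.logback</groupId>
-  <artifactId>logback-classic</artifactId>
-  <version>1.1.2</version>
-</dependency>
-<dependency>
-  <groupId>org.slf4j</groupId>
-  <artifactId>log4j-over-slf4j</artifactId>
-  <version>1.7.7</version>
-</dependency>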
]]>
-class Outer
-{
- void outer(Inner inner)
- {
- }
-
- class Inner
- {
- Outer outer;
-
- void inner()
- {
- outer.outer(this);
- }
- }
-}
-
-
-This code might look quite useless.
-Originally, Inner was a Thread that wanted to signal to its enclosing class that it had finished some work.
-I just stripped away all other code that was not needed to trigger the error.
-
-If you put the class Outer in a maven-project and configure the aspectj-maven-plugin to weave this class with compliance-level 1.6, you will get the following error:
-
-[ERROR] Failed to execute goal org.codehaus.mojo:aspectj-maven-plugin:1.6:compile (default-cli) on project shouter: Compiler errors:
-[ERROR] error at outer.inner(this);
-[ERROR]
-[ERROR] /home/kai/juplo/shouter/src/main/java/Outer.java:16:0::0 The method inner(Outer.Inner) is undefined for the type Outer
-[ERROR] error at queue.done(this, System.currentTimeMillis() - start);
-[ERROR]
-
-The normal compilation works, because the class is syntactically correct Java-7.0-code. But the AspectJ-compiler (version 1.7.4) bundled with the aspectj-maven-plugin will fail!
-Fortunately, I found out how to use the aspectj-maven-plugin with AspectJ 1.8.3.
-So, if you have a similar problem, read on...
]]>-Using the current version (version 1.8.1) of AspectJ solves this issue. But unfortunately, there is no new version of the aspectj-maven-plugin available that uses this new version of AspectJ. The last version of the aspectj-maven-plugin was released to Maven Central on December the 4th, 2013, and this version is bundled with version 1.7.2 of AspectJ.
-The simple solution is to make the aspectj-maven-plugin use the current version of AspectJ. This can be done by overriding its dependency on the bundled AspectJ. The following definition of the plugin does the trick:
-
-<plugin>
- <groupId>org.codehaus.mojo</groupId>
- <artifactId>aspectj-maven-plugin</artifactId>
- <version>1.6</version>
- <configuration>
- <complianceLevel>1.7</complianceLevel>
- <aspectLibraries>
- <aspectLibrary>
- <groupId>org.springframework</groupId>
- <artifactId>spring-aspects</artifactId>
- </aspectLibrary>
- </aspectLibraries>
- </configuration>
- <executions>
- <execution>
- <goals>
- <goal>compile</goal>
- </goals>
- </execution>
- </executions>
- <dependencies>
- <dependency>
- <groupId>org.aspectj</groupId>
- <artifactId>aspectjtools</artifactId>
- <version>1.8.1</version>
- </dependency>
- </dependencies>
-</plugin>
-
-The crucial part is the explicit dependency; the rest depends on your project and might have to be adjusted accordingly:
-
- <dependencies>
- <dependency>
- <groupId>org.aspectj</groupId>
- <artifactId>aspectjtools</artifactId>
- <version>1.8.1</version>
- </dependency>
- </dependencies>
-
-I hope that helps, folks!
]]>
-This release mainly fixes a NullPointerException-bug that was introduced in 1.0.4.
-The NPE was triggered if a hibernate.properties-file is present and the dialect is specified in that file and not in the plugin-configuration.
-Thanks to Paulo Pires and everflux for pointing me to that bug.
-
-But there are also some minor improvements to talk about:
-
-    Hibernate Core was upgraded to 4.3.7.Final
-    Hibernate Envers was upgraded to 4.3.7.Final
-    Hibernate Validator was upgraded to 5.1.3.Final
-
-The upgrade of Hibernate Validator is a big step, because 5.x supports Bean Validation 1.1 (JSR 349).
-See the FAQ of hibernate-validator for more details on this.
-
-Because Hibernate Validator 5 requires the Unified Expression Language (EL) in version 2.2 or later, a dependency to javax.el-api:3.0.0 was added.
-That does the trick for the integration-tests included in the source code of the plugin.
-But, because I am not using Hibernate Validator in any of my own projects at the moment, the upgrade may raise some backward-compatibility issues that I am not aware of.
-If you stumble across any problems, please let me know!
-
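-If your own project hits errors about a missing EL-implementation after this upgrade, adding the dependency mentioned above to your pom.xml might help (a sketch; the coordinates are those of javax.el-api:3.0.0):
-
-<dependency>
-  <groupId>javax.el</groupId>
-  <artifactId>javax.el-api</artifactId>
-  <version>3.0.0</version>
-</dependency>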
-commit ec30af2068f2d12a9acf65474ca1a4cdc1aa7122
-Author: Kai Moritz
-Date: Tue Nov 11 15:28:12 2014 +0100
-
- [maven-release-plugin] prepare for next development iteration
-
-commit 18840e3c775584744199d8323eb681b73b98e9c4
-Author: Kai Moritz
-Date: Tue Nov 11 15:27:57 2014 +0100
-
- [maven-release-plugin] prepare release hibernate4-maven-plugin-1.0.5
-
-commit b95416ef16bbaafecb3d40888fe97e70cdd75c77
-Author: Kai Moritz
-Date: Tue Nov 11 15:10:32 2014 +0100
-
- Upgraded hibernate-validator from 4.3.2.Final to 5.1.3.Final
-
- Hibernate Validator 5 requires the Unified Expression Language (EL) in
- version 2.2 or later. Therefore, a dependency to javax.el-api:3.0.0 was
- added. (Without that, the compilation of some integration-tests fails!)
-
-commit ad979a8a82a7701a891a59a183ea4be66672145b
-Author: Kai Moritz
-Date: Tue Nov 11 14:32:42 2014 +0100
-
- Upgraded hibernate-core, hibernate-envers, hibernate-validator and maven-core
-
- * Upgraded hibernate-core from 4.3.1.Final to 4.3.7.Final
- * Upgraded hibernate-envers from 4.3.1.Final to 4.3.7.Final
- * Upgraded hibernate-validator from 4.3.1.Final to 4.3.2.Final
- * Upgraded maven-core from 3.2.1 to 3.2.3
-
-commit 347236c3cea0f204cefd860c605d9f086e674e8b
-Author: Kai Moritz
-Date: Tue Nov 11 14:29:23 2014 +0100
-
- Added FAQ-entry for problem with whitespaces in the path under Windows
-
-commit 473c3ef285c19e0f0b85643b67bbd77e06c0b926
-Author: Kai Moritz
-Date: Tue Oct 28 23:37:45 2014 +0100
-
- Explained how to suppress dependency-scanning in documentation
-
- Also added a test-case to be sure, that dependency-scanning is skipped, if
- the parameter "dependencyScanning" is set to "none".
-
-commit 74c0dd783b84c90e116f3e7f1c8d6109845ba71f
-Author: Kai Moritz
-Date: Mon Oct 27 09:04:48 2014 +0100
-
- Fixed NullPointerException, when dialect is specified in properties-file
-
- Also added an integration test-case, that proofed, that the error was
- solved.
-
-commit d27f7af23c82167e873ce143e50ce9d9a65f5e61
-Author: Kai Moritz
-Date: Sun Oct 26 11:16:00 2014 +0100
-
- Renamed an integration-test to test for whitespaces in the filename
-
-commit 426d18e689b89f33bf71601becfa465a00067b10
-Author: Kai Moritz
-Date: Sat Oct 25 17:29:41 2014 +0200
-
- Added patch by Joachim Van der Auwera to support package level annotations
-
-commit 3a3aeaabdb1841faf5e1bf8d220230597fb22931
-Author: Kai Moritz
-Date: Sat Oct 25 16:52:34 2014 +0200
-
- Integrated integration test provided by Claus Graf (clausgraf@gmail.com)
-
-commit 3dd832edbd50b1499ea6d53e4bcd0ad4c79640ed
-Author: Kai Moritz
-Date: Mon Jun 2 10:31:13 2014 +0200
-
- [maven-release-plugin] prepare for next development iteration
- ]]>
-@Table(name = "T1", uniqueConstraints=@UniqueConstraint(name="U_REFERENZ", columnNames="REFERENCE"))
-
-
-It also seems that the uniqueConstraint annotation embedded in the table has no effect.
-
-Are we missing some configuration part, is it by intention, or have we spotted a minor glitch?
-Thanks for a short reply.
-
-Again ... thanks for the great plugin
-cheers
-Klaus]]>RestTemplate is quite easy if you know what to do.
-But it is rather hard if you have no clue where to start.
-Hence, I want to give you some hints in this post.
-
-
-In its default configuration, the RestTemplate uses the HttpClient of the Apache HttpComponents package.
-You can verify this, and the version used, with the mvn-command
-
-mvn dependency:tree
-
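-To narrow the output down to the HttpClient-artifact, you can filter the tree, for example like this (a sketch; httpclient is the artifactId used by HttpComponents 4.x):
-
-mvn dependency:tree | grep httpclient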
-To enable, for example, logging of the HTTP-headers sent and received, you can then simply add the following to your logging configuration:
-
-<logger name="org.apache.http.headers">
- <level value="debug"/>
-</logger>
-
-
-If that does not work, you should check which version of the Apache HttpComponents your project is actually using, because the name of the logger has changed between version 3.x and 4.x.
-Another common cause of problems is that the Apache HttpComponents use Apache Commons Logging.
-If the jar for that library is missing, or if your project uses another logging library, the messages might get lost because of that.
-
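-If the headers alone are not enough, HttpClient 4.x also provides the logger org.apache.http.wire, which logs the complete data sent and received over the wire (be warned: this is very verbose):
-
-<logger name="org.apache.http.wire">
- <level value="debug"/>
-</logger>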
-
-SELECTOR
-{
- text-indent: -99em;
- line-height: 0;
-}
-SELECTOR:after
-{
- display: block;
- text-indent: 0;
- content: REPLACEMENT;
-}
-
-
-
-SELECTOR can be any valid CSS-selector.
-REPLACEMENT references the graphic that should replace the text.
-This can be an SVG-graphic, a vector-graphic from a font, any bitmap-graphic or (quite useless, but a simple case for understanding the source, like in the first of my two examples) other text.
-SVG- and bitmap-graphics are simply referenced by a URL in the content-directive, like I have done with a data-URL in my second example.
-For an icon embedded in a font, you simply put the character-code of the icon in the content-directive, as described in the according ALA-article.
-
-If you need backward compatibility with Internet Explorer 8 and below or Android 2.3 and below, you have to use icon-fonts to support these old browsers. I often use this technique if I have a brand logo that should be inserted in an accessible way and I do not want to bloat up the HTML-markup with useless tags to achieve this.
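-For instance, a minimal sketch of the technique (the class-name brand, the heading and the file logo.svg are made up for illustration):
-
-<h1 class="brand">ACME Inc.</h1>
-
-h1.brand
-{
-  text-indent: -99em;
-  line-height: 0;
-}
-h1.brand:after
-{
-  display: block;
-  text-indent: 0;
-  content: url(logo.svg);
-}
-
-The heading-text stays in the markup for screen-readers and search-engines, while sighted users only see the logo.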
]]>
-The main work in this release were modifications to the process of configuration-gathering.
-The plugin now also looks for a hibernate.cfg.xml on the classpath or for a persistence-unit specified in a META-INF/persistence.xml.
-
-With this enhancement, the plugin is now able to deal with all examples from the official -Hibernate Getting Started Guide. -
-All configuration information found is merged together, with the same default precedences that hibernate itself applies. So, the overall order in which possible configuration-sources are checked is now (each later source might overwrite settings of a previous source):
-
-    1. hibernate.properties
-    2. hibernate.cfg.xml
-    3. persistence.xml
-
-Because the possible new configuration-sources might change the expected behavior of the plugin, we lifted the version to 1.1.
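-As an illustration of this precedence (a made-up example): if your hibernate.properties sets hibernate.dialect=org.hibernate.dialect.H2Dialect, but the persistence-unit in your META-INF/persistence.xml contains the following property, the plugin - like hibernate itself - will use the PostgreSQL-dialect:
-
-<property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQL9Dialect"/>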
-This release also fixes a bug that occurred on some platforms, if the path to the project includes one or more space characters.
-
-commit 94e6b2e93fe107e75c9d20aa1eb3126e78a5ed0a
-Author: Kai Moritz
-Date: Sat May 16 14:14:44 2015 +0200
-
- Added script to check outcome of the hibernate-tutorials
-
-commit b3f8db2fdd9eddbaac002f94068dd1b4e6aef9a8
-Author: Kai Moritz
-Date: Tue May 5 12:43:15 2015 +0200
-
- Configured hibernate-tutorials to use the plugin
-
-commit 4b6fc12d443b0594310e5922e6ad763891d5d8fe
-Author: Kai Moritz
-Date: Tue May 5 12:21:39 2015 +0200
-
- Fixed the settings in the pom's of the tutorials
-
-commit 70bd20689badc18bed866b3847565e1278433503
-Author: Kai Moritz
-Date: Tue May 5 11:49:30 2015 +0200
-
- Added tutorials of the hibernate-release 4.3.9.Final as integration-tests
-
-commit 7e3e9b90d61b077e48b59fc0eb63059886c68cf5
-Author: Kai Moritz
-Date: Sat May 16 11:04:36 2015 +0200
-
- JPA-jdbc-properties are used, if appropriate hibernate-properties are missing
-
-commit c573877a186bec734915fdb3658db312e66a9083
-Author: Kai Moritz
-Date: Thu May 14 23:43:13 2015 +0200
-
- Hibernate configuration is gathered from class-path by default
-
-commit 2a85cb05542795f9cd2eed448f212f92842a85e8
-Author: Kai Moritz
-Date: Wed May 13 09:44:18 2015 +0200
-
- Found no way to check, that mapped classes were found
-
-commit 038ccf9c60be6c77e2ba9c2d2a2a0d261ce02ccb
-Author: Kai Moritz
-Date: Tue May 12 22:13:23 2015 +0200
-
- Upgraded scannotation from 1.0.3 to 1.0.4
-
- This fixes the bug that occures on some platforms, if the path contains a
- space. Created a fork of scannotation to bring the latest bug-fixes from SVN
- to maven central...
-
-commit c43094689043d7da04df6ca55529d0f0c089d820
-Author: Kai Moritz
-Date: Sun May 10 19:06:27 2015 +0200
-
- Added javadoc-jar to deployed artifact
-
-commit 524cb8c971de87c21d0d9f0e04edf6bd30f77acc
-Author: Kai Moritz
-Date: Sat May 9 23:48:39 2015 +0200
-
- Be sure to relase all resources (closing db-connections!)
-
-commit 1e5cca792c49d60e20d7355eb97b13d591d80af6
-Author: Kai Moritz
-Date: Sat May 9 22:07:31 2015 +0200
-
- Settings in a hibernate.cfg.xml are read
-
-commit 9156c5f6414b676d34eb0c934e70604ba822d09a
-Author: Kai Moritz
-Date: Tue May 5 23:42:40 2015 +0200
-
- Catched NPE, if hibernate-dialect is not set
-
-commit 62859b260a47e70870e795304756bba2750392e3
-Author: Kai Moritz
-Date: Sun May 3 18:53:24 2015 +0200
-
- Upgraded oss-type, maven-plugin-api and build/report-plugins
-
-commit c1b3b60be4ad2c5c78cb1e3706019dfceb390f89
-Author: Kai Moritz
-Date: Sun May 3 18:53:04 2015 +0200
-
- Upgraded hibernate to 4.3.9.Final
-
-commit 248ff3220acc8a2c11281959a1496adc024dd4df
-Author: Kai Moritz
-Date: Sun May 3 18:09:12 2015 +0200
-
- Renamed nex release to 1.1.0
-
-commit 2031d4cfdb8b2d16e4f2c7bbb5c03a15b4f64b21
-Author: Kai Moritz
-Date: Sun May 3 16:48:43 2015 +0200
-
- Generation of tables and rows for auditing is now default
-
-commit 42465d2a5e4a5adc44fbaf79104ce8cc25ecd8fd
-Author: Kai Moritz
-Date: Sun May 3 16:20:58 2015 +0200
-
- Fixed mojo to scan for properties in persistence.xml
-
-commit d5a4326bf1fe2045a7b2183cfd3d8fdb30fcb406
-Author: Kai Moritz
-Date: Sun May 3 14:51:12 2015 +0200
-
- Added an integration-test, that depends on properties from a persistence.xml
-
-commit 5da1114d419ae10f94a83ad56cea9856a39f00b6
-Author: Kai Moritz
-Date: Sun May 3 14:51:46 2015 +0200
-
- Switched to usage of a ServiceRegistry
-
-commit fed9fc9e4e053c8b61895e78d1fbe045fadf7348
-Author: Kai Moritz
-Date: Sun May 3 11:42:54 2015 +0200
-
- Integration-Test for envers really generates the SQL
-
-commit fee05864d61145a06ee870fbffd3bff1e95af08c
-Author: Kai Moritz
-Date: Sun Mar 15 16:56:22 2015 +0100
-
- Extended integration-test "hib-test" to check for package-level annotations
-
-commit 7518f2a7e8a3d900c194dbe61609efa34ef047bd
-Author: Kai Moritz
-Date: Sun Mar 15 15:42:01 2015 +0100
-
- Added support for m2e
-
- Thanks to Andreas Khutz
-
- ]]>-HTML5 introduces new semantic elements, accompanied by the definition of a new algorithm to calculate the document-outline from the markup. There are plenty of good explanations of these new possibilities to point out your content in a more controlled way. But most of these explanations fall short when it comes to putting the new markup into use in a way that results in a sensible outline of the marked-up document.
-In this article I will try to explain how to use the new semantic markup to produce an outline that is usable as a real table of contents of the document - not just as a partially ordered overview of all headings. I will do so by showing simple examples that illuminate the principles behind the new markup.
-
-Although the ideas behind the new markup seem to be simple and clear, nearly nobody manages to produce a sensible outline.
-Even the big players, who guide us through the jungle of the new specifications and give great explanations of the subject, either fail on their own sites (see for yourself with the help of the h5o HTML5 Outline Bookmarklet) or produce the outline in the old way, by the usage of h1-h6 only, like the fabulous HTML5-bible Dive Into HTML5.
-
-This is because there is a lot to mix up when trying to adopt the new features. Here is what I ended up with on my first try to combine what I had learned about semantic elements and the document outline:
-
-
-<!DOCTYPE html>
-<title>Example 01</title>
-<header>
- <h2>Header</h2>
- <nav>Navigation</nav>
-</header>
-<main>
- <h1>Main</h1>
- <section>
- <h2>Section I</h2>
- </section>
- <section>
- <h2>Section II</h2>
- <section>
- <h3>Subsection a</h3>
- </section>
- <section>
- <h3>Subsection b</h3>
- </section>
- </section>
- <section>
- <h2>Section III</h2>
- <section>
- <h3>Subsection a</h3>
- </section>
- </section>
-</main>
-<aside>
- <h1>Aside</h1>
-</aside>
-<footer>
- <h2>Footer</h2>
-</footer>
-
-- View example 01 -
-
-That was not quite the outline I had expected.
-I had planned for Header, Main, Aside and Footer to end up at the same level.
-Instead, Aside and Footer had become sections of my Main-content.
-And where the hell does that Untitled section come from?!?
-My first thought was: no problem, I just forgot the header-tags.
-But after adding them, the only thing that cleared up was where the Untitled section was coming from:
-
-<!DOCTYPE html>
-<title>Example 02</title>
-<header>
- <h2>Header</h2>
- <nav>
- <header><h3>Navigation</h3></header>
- </nav>
-</header>
-<main>
- <header><h1>Main</h1></header>
- <section>
- <header><h2>Section I</h2></header>
- </section>
- <section>
- <header><h2>Section II</h2></header>
- <section>
- <header><h3>Subsection a</h3></header>
- </section>
- <section>
- <header><h3>Subsection b</h3></header>
- </section>
- </section>
- <section>
- <header><h2>Section III</h2></header>
- <section>
- <header><h3>Subsection a</h3></header>
- </section>
- </section>
-</main>
-<footer>
- <header><h2>Footer</h2></header>
-
-- View example 02 -
-
-So I thought: maybe the main-tag was the wrong choice.
-Perhaps it should be replaced by an article.
-But after that change, the outline got even worse.
-Now Navigation, Main and Aside appeared on the same level, all as subsections of Header.
-At least Footer suddenly was a sibling of Header, as planned:
-
-<!DOCTYPE html>
-<title>Example 03</title>
-<header>
- <h2>Header</h2>
- <nav>
- <header><h3>Navigation</h3></header>
- </nav>
-</header>
-<article>
- <header><h1>Article (Main)</h1></header>
- <section>
- <header><h2>Section I</h2></header>
- </section>
- <section>
- <header><h2>Section II</h2></header>
- <section>
- <header><h3>Subsection a</h3></header>
- </section>
- <section>
- <header><h3>Subsection b</h3></header>
- </section>
- </section>
- <section>
- <header><h2>Section III</h2></header>
- <section>
- <header><h3>Subsection a</h3></header>
- </section>
- </section>
-</article>
-<footer>
- <header><h2>Footer</h2></header>
-</footer>
-
-- View example 03 -
-After that, I was totally confused and decided to sort it out step by step. That procedure finally gave me the clue I want to share with you now.
-
-Let us start with the strictly structured part of the document: the article and its subsections.
-First, a minimal example with no markup except the article- and the section-tags:
-
-<!DOCTYPE html>
-<title>Example 04</title>
-<article>
- Main
- <section>
- Section I
- </section>
- <section>
- Section II
- <section>
- Subsection a
- </section>
- <section>
- Subsection b
- </section>
- </section>
- <section>
- Section III
- <section>
- Subsection a
- </section>
- </section>
-</article>
-
-- View Example 04 -
-
-Nothing really unexpected here.
-The article- and section-tags are reflected in the outline according to their nesting.
-The only thing notable here is that the body itself is also reflected in the outline.
-It appears on its own level as the root-element of all tags.
-We can think of it as the title of our document.
-
-We can add headings of any kind (h1-h6) here and will always get an identically structured outline that reflects the text of our headings.
-If we want to give the body a title, we have to place a heading outside of and before any sectioning-elements:
-
-<!DOCTYPE html>
-<title>Example 05</title>
-<h1>Page</h1>
-<article>
- <h1>Article</h1>
- <section>
- <h1>Section I</h1>
- </section>
- <section>
- <h1>Section II</h1>
- <section>
- <h1>Subsection a</h1>
- </section>
- <section>
- <h1>Subsection b</h1>
- </section>
- </section>
- <section>
- <h1>Section III</h1>
- <section>
- <h1>Subsection a</h1>
- </section>
- </section>
-</article>
-
-- View Example 05 -
-This is the new part of the outline algorithm introduced in HTML5: the nesting of elements that define sections determines the outline of the document. The rank of the heading elements is ignored by this algorithm!
-
-Among the elements that define sections in HTML5 are the article- and the section-tags.
-But there are more.
-I will discuss them later.
-For now, you only have to know that in HTML5, sectioning elements define the structure of the outline.
-Also, you should memorize that the outline always has a single root without any siblings: the body.
-
-So, let us do the same with the tags that represent the different logical sections of a web-page: the page-elements.
-We start with a minimal example again, containing no markup except the header-, the main- and the footer-tags:
-
-<!DOCTYPE html>
-<title>Example 06</title>
-<header>Page</header>
-<main>Main</main>
-<footer>Footer</footer>
-
-
-That is weird, eh?
-There is only one untitled element in the outline.
-The explanation for this is that neither the header- nor the main- nor the footer-tag belongs to the elements that define a section in HTML5!
-This is often confused, because these elements define the logical sections (header – main-content – footer) of a website.
-But these logical sections have nothing to do with the structural sectioning of the document, which defines the outline.
-
-So, what happens if we add the desired markup for our headings?
-We want an h1-heading for our main-content, because it is the most important part of our page.
-The header should get an h2-heading and the footer an h3-heading, because it is rather unimportant.
-
-<!DOCTYPE html>
-<title>Example 07</title>
-<header><h2>Page</h2></header>
-<main><h1>Main</h1></main>
-<footer><h3>Footer</h3></footer>
-
-Now, there is an outline again. But why? And why does it look this way?
-
-What happens here is implicit sectioning.
-In short, implicit sectioning is the outline algorithm of HTML4.
-HTML5 needs implicit sectioning to stay compatible with HTML4, which still dominates the web.
-In fact, we could have used plain HTML4, with div instead of header, main and footer, and it would have yielded the exact same outline:
-
-<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
-<html>
- <head><title>Example 08</title></head>
- <body>
- <div class="header"><h2>Page</h2></div>
- <div class="main"><h1>Main</h1></div>
- <div class="footer"><h3>Footer</h3></div>
- </body>
-</html>
-
-
-In HTML4, solely the headings (h1-h6) define the outline of a document.
-The enclosing elements, or any nesting of them, are ignored altogether.
-The level at which a heading appears in the outline is defined by the rank of the heading alone.
-(Strictly speaking, HTML4 does not define anything like a document outline.
-But as a result of common usage and interpretation, this is how people outline their documents with HTML4.)
-
-The implicit sectioning of HTML5 works in a way that is backward compatible with this way of outlining, but closes the gaps in the resulting hierarchy: each heading implicitly opens a section – hence the name – but if there is a gap between its rank and the rank of its ancestor – that is, the last preceding heading with a higher rank – it is placed on the level directly beneath its ancestor:
-
-<!DOCTYPE html>
-<title>Example 09</title>
-<h4>h4</h4>
-<h2>h2</h2>
-<h4>h4</h4>
-<h3>h3</h3>
-<h2>h2</h2>
-<h1>h1</h1>
-<h2>h2</h2>
-<h3>h3</h3>
-
-- View Example 09 -
-
-See how the first heading, an h4, ends up on the same level as the second, which is an h2.
-Or how the third and fourth headings are both on the same level under the h2, although they are of different rank.
-And note how the h2 and h3 end up on different sectioning-levels than their earlier appearances when they follow an h1 in the natural order.
-
-With the gathered clues in mind, we can now retry laying out our document with the desired outline.
-If we want Header, Main and Footer to end up as top-level citizens in our planned outline, we simply have to ensure that they are all recognized as sections directly under the top level by the HTML5 outline algorithm.
-We can do that by explicitly stating that the header and the footer are sections:
-
-<!DOCTYPE html>
-<title>Example 10</title>
-<header>
- <section>
- <h2>Header</h2>
- </section>
-</header>
-<main>
- <article>
- <h1>Article</h1>
- <section>
- <h2>Section I</h2>
- </section>
- <section>
- <h2>Section II</h2>
- <section>
- <h3>Subsection a</h3>
- </section>
- <section>
- <h3>Subsection b</h3>
- </section>
- </section>
- <section>
- <h2>Section III</h2>
- <section>
- <h3>Subsection a</h3>
- </section>
- </section>
- </article>
-</main>
-<footer>
- <section>
- <h3>Footer</h3>
- </section>
-</footer>
-
-- View Example 10 -
-So far, so good. But what about the untitled body? We forgot about the single root of any outline that is defined by the body, as we learned back in step 1. As shown in example 05, we can simply name it by putting a heading outside of and before any element that defines a section:
-
-<!DOCTYPE html>
-<title>Example 11</title>
-<header>
- <h2>Page</h2>
- <section>
- <h3>Header</h3>
- </section>
-</header>
-<main>
- <article>
- <h1>Article</h1>
- <section>
- <h2>Section I</h2>
- </section>
- <section>
- <h2>Section II</h2>
- <section>
- <h3>Subsection a</h3>
- </section>
- <section>
- <h3>Subsection b</h3>
- </section>
- </section>
- <section>
- <h2>Section III</h2>
- <section>
- <h3>Subsection a</h3>
- </section>
- </section>
- </article>
-</main>
-<footer>
- <section>
- <h3>Footer</h3>
- </section>
-</footer>
-
-- View Example 11 -
-
-The eagle-eyed among you might have noticed that I had "forgotten" the two element-types nav and aside when we were investigating the elements that define the logical structure of the page in step 2.
-I did not forget about them – I left them out intentionally.
-Because otherwise, the results of example 07 would have been too confusing to make my point about implicit sectioning.
-Let us look at what would have happened:
-
-<!DOCTYPE html>
-<title>Example 12</title>
-<header>
- <h1>Page</h1>
- <nav><h1>Navigation</h1></nav>
-</header>
-<main><h1>Main</h1></main>
-<aside><h1>Aside</h1></aside>
-<footer><h1>Footer</h1></footer>
-
-- View Example 12 -
-
-What is wrong there?
-Why are Navigation and Aside showing up as children, although we marked up every element with headings of the same rank?
-The reason for this is that nav and aside are sectioning elements:
-
-<!DOCTYPE html>
-<title>Example 13</title>
-<header>
- Page
- <nav>Navigation</nav>
-</header>
-<main>Main</main>
-<aside>Aside</aside>
-<footer>Footer</footer>
-
-- View Example 13 -
-
-The HTML5 spec defines four sectioning elements: article, section, nav and aside!
-Some explain the confusion about this fact with the constantly evolving standard, which leads to structurally unclear specifications.
-I will be frank:
-I cannot imagine any good reason for this decision!
-In my opinion, the concept would be much clearer if article and section were the only two sectioning elements, and nav and aside only defined the logical structure of the page, like header and footer.
-
-Knowing that nav and aside define sections, we can now complete our outline, skillfully avoiding the appearance of untitled sections:
-
-<!DOCTYPE html>
-<title>Example 14</title>
-<header>
- <h2>Page</h2>
- <section>
- <h3>Header</h3>
- <nav><h4>Navigation</h4></nav>
- </section>
-</header>
-<main>
- <article>
- <h1>Main</h1>
- <section>
- <h2>Section I</h2>
- </section>
- <section>
- <h2>Section II</h2>
- <section>
- <h3>Subsection a</h3>
- </section>
- <section>
- <h3>Subsection b</h3>
- </section>
- </section>
- <section>
- <h2>Section III</h2>
- <section>
- <h3>Subsection a</h3>
- </section>
- </section>
- </article>
-</main>
-<aside><h3>Aside</h3></aside>
-<footer>
- <section>
- <h3>Footer</h3>
- </section>
-</footer>
-
-- View Example 14 -
-Et voilà: our perfect outline!
-If you memorize the concepts that you have learned in this little tutorial, you should now be able to mark up your documents to generate your own perfect outline...
--...but: one last word about headings: -
-It is crucial to note that the new outline-algorithm still is a fiction: most user agents do not implement it yet. Hence, you should still stick to the old hints for keeping your content accessible and for pointing out the most important heading to the search engines.
-But there is no reason not to apply the new possibilities shown in this article to your markup: it will only make it more future-proof. It is very likely that search engines will start to adopt the HTML5 outline algorithm to make more sense out of your content in the near future - or they are already doing so... So, why not be one of the first to gain from this new technique?
-I would advise you to adopt the new possibilities to section your content and generate a sensible outline, while still keeping the old heading-ranks, to stay backward compatible.
]]>-If you just want to enable your spring-based web-application to let users log in with their social accounts, without changing anything else, pac4j should be your first choice. But the provided example only shows how to define all authentication mechanisms via pac4j. If you have already set up your log-in via spring-security, you would have to reconfigure it with the appropriate pac4j-mechanism. That is a lot of unnecessary work if you just want to supplement the already configured log-in with the additional possibility of logging in via a social provider.
-In this short article, I will show you how to set that up alongside the normal form-based login of Spring-Security. I will show this for a login via Facebook alongside the form-login of Spring-Security. The method should work as well for other social logins that are supported by spring-security-pac4j, alongside other login-mechanisms provided by spring-security out of the box.
-In this article I will not explain how to store the user-profile-data that is retrieved during the social login. Also, if you need more social interaction than just a login and access to the default data in the user-profile, you probably need spring-social. How to combine spring-social with spring-security for that purpose is explained in this nice article about how to add social sign-in to a spring-mvc web-application.
-In order to use spring-security-pac4j to log in to Facebook, you need the following maven-artifacts:
-
-<dependency>
- <groupId>org.pac4j</groupId>
- <artifactId>spring-security-pac4j</artifactId>
- <version>1.2.5</version>
-</dependency>
-<dependency>
- <groupId>org.pac4j</groupId>
- <artifactId>pac4j-http</artifactId>
- <version>1.7.1</version>
-</dependency>
-<dependency>
- <groupId>org.pac4j</groupId>
- <artifactId>pac4j-oauth</artifactId>
- <version>1.7.1</version>
-</dependency>
-
-
-This is a bare minimal configuration to get the form-login via Spring-Security working:
-
-<?xml version="1.0" encoding="UTF-8"?>
-<beans
- xmlns="http://www.springframework.org/schema/beans"
- xmlns:security="http://www.springframework.org/schema/security"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="
- http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
- http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.2.xsd
- ">
-
- <security:http use-expressions="true">
- <security:intercept-url pattern="/home.html" access="isAuthenticated()"/>
- <security:intercept-url pattern="/**" access="permitAll"/>
- <security:form-login login-page="/login.html" authentication-failure-url="/login.html?failure"/>
- <security:logout/>
- <security:remember-me/>
- </security:http>
-
- <security:authentication-manager>
- <security:authentication-provider>
- <security:user-service>
- <security:user name="user" password="user" authorities="ROLE_USER" />
- </security:user-service>
- </security:authentication-provider>
- </security:authentication-manager>
-
-</beans>
-
-
-The http-element defines, that the access to the URL /home.html is restricted and must be authenticated via a form-login on the URL /login.html (note, that the more specific pattern has to come first, because the first matching intercept-url wins).
-The authentication-manager defines an in-memory authentication-provider for testing purposes with just one user (username: user, password: user).
-For more details, see the documentation of spring-security.
-
-To enable pac4j alongside, you have to add/change the following: -
-
-<?xml version="1.0" encoding="UTF-8"?>
-<beans
- xmlns="http://www.springframework.org/schema/beans"
- xmlns:security="http://www.springframework.org/schema/security"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="
- http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
- http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.2.xsd
- ">
-
- <security:http use-expressions="true">
- <security:custom-filter position="OPENID_FILTER" ref="clientFilter"/>
- <security:intercept-url pattern="/home.html" access="isAuthenticated()"/>
- <security:intercept-url pattern="/**" access="permitAll"/>
- <security:form-login login-page="/login.html" authentication-failure-url="/login.html?failure"/>
- <security:logout/>
- </security:http>
-
- <security:authentication-manager alias="authenticationManager">
- <security:authentication-provider>
- <security:user-service>
- <security:user name="user" password="user" authorities="ROLE_USER" />
- </security:user-service>
- </security:authentication-provider>
- <security:authentication-provider ref="clientProvider"/>
- </security:authentication-manager>
-
- <!-- entry points -->
- <bean id="facebookEntryPoint" class="org.pac4j.springframework.security.web.ClientAuthenticationEntryPoint">
- <property name="client" ref="facebookClient"/>
- </bean>
-
- <!-- client definitions -->
- <bean id="facebookClient" class="org.pac4j.oauth.client.FacebookClient">
- <property name="key" value="145278422258960"/>
- <property name="secret" value="be21409ba8f39b5dae2a7de525484da8"/>
- </bean>
- <bean id="clients" class="org.pac4j.core.client.Clients">
- <property name="callbackUrl" value="http://localhost:8080/callback"/>
- <property name="clients">
- <list>
- <ref bean="facebookClient"/>
- </list>
- </property>
- </bean>
-
- <!-- common to all clients -->
- <bean id="clientFilter" class="org.pac4j.springframework.security.web.ClientAuthenticationFilter">
- <constructor-arg value="/callback"/>
- <property name="clients" ref="clients"/>
- <property name="sessionAuthenticationStrategy" ref="sas"/>
- <property name="authenticationManager" ref="authenticationManager"/>
- </bean>
- <bean id="clientProvider" class="org.pac4j.springframework.security.authentication.ClientAuthenticationProvider">
- <property name="clients" ref="clients"/>
- </bean>
- <bean id="httpSessionRequestCache" class="org.springframework.security.web.savedrequest.HttpSessionRequestCache"/>
- <bean id="sas" class="org.springframework.security.web.authentication.session.SessionFixationProtectionStrategy"/>
-
-</beans>
-
--In short: -
-You have to add the clientFilter as a custom filter to the http-element.
-I added this filter on position OPENID_FILTER, because pac4j introduces a unified way to handle OpenID, OAuth and so on.
-If you are using the OpenID-mechanism of spring-security, you have to use another position in the filter-chain (for example CAS_FILTER) or reconfigure OpenID to use the pac4j-mechanism, which should be fairly straight-forward.
-The clientFilter takes the callback-path as constructor-argument, references the defined clients and needs a reference to the authenticationManager.
-Also, the callback-URL (here: /callback) must be mapped to your web-application!
-Finally, you have to add an additional authentication-provider to the authentication-manager, that references your newly defined pac4j-ClientProvider (clientProvider).
--That should be all that is necessary to enable a Facebook-login in your Spring-Security web-application. -
-
-The App-ID 145278422258960 and the accompanying secret be21409ba8f39b5dae2a7de525484da8 were taken from the spring-security-pac4j example for simplicity.
-That works for a first test-run on localhost.
-But you have to replace them with your own App-ID and -secret, which you have to generate using your App Dashboard on Facebook!
-
-This short article does not show how to save the retrieved user-profiles in your user-database, if you need that.
-I hope to write a follow-up on that soon.
-In short:
-pac4j creates a Spring-Security UserDetails-instance for every user, that was authenticated against it.
-You can use this to access the data in the retrieved user-profile (for example, to write out the name of the user in a greeting or to contact him via e-mail).
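--For illustration, here is a minimal, hedged sketch of such an access (the snippet and its names are my own illustration, not part of the example; it assumes, that spring-security-pac4j has authenticated the user with its ClientAuthenticationToken ‐ check the API of the version you are using): -
-
-// SecurityContextHolder comes from spring-security-core,
-// ClientAuthenticationToken from spring-security-pac4j and
-// UserProfile from pac4j-core.
-Authentication authentication =
-    SecurityContextHolder.getContext().getAuthentication();
-if (authentication instanceof ClientAuthenticationToken)
-{
-  // Assumption: the token carries the profile, that was retrieved
-  // during the social login.
-  UserProfile profile =
-      ((ClientAuthenticationToken) authentication).getUserProfile();
-  // e.g. greet the user with the name from his Facebook-profile
-  Object name = profile.getAttribute("name");
-}
-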
-
-Have you ever stumbled across weird errors with font-files, that could not be loaded, or SVG-graphics, that are not shown during local development on your machine using file:///-URI's, though everything works as expected, if you push the content to a webserver and access it via HTTP?
-Furthermore, the browsers behave very differently here.
-Firefox, for example, just states that the download of the font failed:
-
-
-downloadable font: download failed (font-family: "XYZ" style:normal weight:normal stretch:normal src index:0): status=2147500037 source: file:///home/you/path/to/font/xyz.woff
-
--Meanwhile, Chrome just happily uses the same font. -As for the SVG-graphics, that are not shown: Firefox just does not show them, as if it were not able to at all. -Chrome logs an error: -
-
-
-Unsafe attempt to load URL file:///home/you/path/to/project/img/sprite.svg#logo from frame with URL file:///home/you/path/to/project/templates/layout.html. Domains, protocols and ports must match
-
--...though no protocol, domain or port is involved. -
- --The reason for this strange behavior is the Same-origin policy. -Chrome gives you a hint in this direction with the remark that something does not match. -I found the trail that led me to this explanation while googling for the strange error-message that Firefox gives for the fonts that cannot be loaded. -
-- -The Same-origin policy forbids locally stored files to access any data, that is stored in a parent-directory. -They only have access to files, that reside in the same directory or in a directory beneath it. - -
--You can read more about that rule on MDN. -
-
-I often violate that rule when developing templates for dynamically rendered pages with Thymeleaf or similar techniques.
-That is because I like to place the template-files in a subdirectory of the directory, that contains my webapp (src/main/webapp with Maven):
-
-
-+ src/main/webapp/
- + css/
- + img/
- + fonts/
- + thymeleaf/templates/
-
-
--I have put together a simple example-project for developing static templates with LESS, nodejs and grunt, that shows the problem and the quick solution for Firefox presented later. -You can browse it on my juplo.de/gitweb, or clone it with: -
-
-
-git clone http://juplo.de/git/examples/template-development
-
-The bottom line is: you will run into these problems whenever you work with file:///-URI's during development.
-The only real solution is to access your files through the HTTP-protocol, like in production.
-If you do not want to do that, the only two cross-browser workarounds are to turn off the Same-origin policy in the browser you use for development:
-
-In Firefox, you can set security.fileuri.strict_origin_policy to false on the about:config-page.
-In Chrome, you can start the browser with the command-line switch --disable-web-security or --allow-file-access-from-files (for more, see this question on Stackoverflow).
-
--But turning off the Same-origin policy is not recommended. -I would only do that, if your browser is used exclusively to access the HTML-files under development ‐ which I doubt is the case. -Anyway, this is a good quick test to validate that the Same-origin policy is the source of your problems ‐ if you quickly re-enable it after the validation. -
--The only real cross-browser solution is to circumvent the problem altogether and serve the content with a local webserver, so that you can access it through HTTP, like in production. -You can read how to extend the example-project mentioned above to achieve that goal in a follow-up article. -
--If you develop with Firefox, there is also a quick fix to bypass the Same-origin policy for local files. -
--As the explanation on MDN states, a file loaded in a frame shares the same origin as the file, that contains the frameset. -This can be used to bypass the policy, if you place a file with a frameset in the topmost directory of your development-folder and load the template under development through that file. -
--In my case, the frameset-file looks like this: -
-
-
-<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd">
-<html>
- <head>
- <meta http-equiv="content-type" content="text/html; charset=utf-8">
- <title>Frameset to Bypass Same-Origin-Policy</title>
- </head>
- <frameset>
- <frame src="thymeleaf/templates/layout.html">
- </frameset>
-</html>
-Nowadays, frontend-development is mostly done with Nodejs and Grunt. -On npm, there are plenty of useful plugins, that ease the development of HTML and CSS. -For example grunt-contrib-less to automate the compilation of LESS-sourcecode to CSS, or grunt-svgstore to pack several SVG-graphics into a single SVG-sprite. -
--Because of that, I decided to switch to Nodejs and Grunt to develop the HTML- and CSS-markup for the templates, that I need for my Spring/Thymeleaf-applications. -But as with everything new, it took some hard work to plug together what I needed. -In this article I want to share how I have set up a really minimalistic, but powerful development-environment for static HTML-templates, that suits all of my initial needs. -
--This might not be the best solution, but it is a good starting point for beginners like me and it is here to be improved through your feedback! -
--You can browse the example-development-environment on juplo.de/gitweb, or clone it with: -
-
-git clone http://juplo.de/git/examples/template-development
-
--After installing npm you have to fetch the dependencies with: -
-
-
-npm install
-
--Then you can fire up a build with: -
-
-
-grunt
-
--...or start a webserver for development with: -
-
-grunt run-server
-
-
-
-The hardest part while putting together the development-environment was my need to automatically build the static HTML and CSS after file-changes and serve them via a local webserver.
-As I wrote in an earlier article, I often stumble over problems, that arise from the Same-origin policy when accessing the files locally through file:///-URI's.
-
-I was a bit surprised that I could not find a simple explanation of how to set up a grunt-task to build the project automatically on file-changes and serve the generated HTML and CSS locally. -That is the main reason why I am writing this explanation now, in order to fill that gap ;) -
--I realised that goal by implementing a grunt-task, that spawns a process that uses the http-server to serve up the files, and by combining that task with a common watch-task: -
-
-
-grunt.registerTask('http-server', function() {
-
- grunt.util.spawn({
- cmd: 'node_modules/http-server/bin/http-server',
- args: [ 'dist' ],
- opts: { stdio: 'inherit' }
- });
-
-});
-
-grunt.registerTask('run-server', [ 'default', 'http-server', 'watch' ]);
-
--The rest of the configuration is really pretty self-explanatory. -I just put together the pieces I needed for my template-development (copy some static HTML and generate CSS from the LESS-sources) and configured grunt-contrib-watch to rebuild the project automatically, if anything changes. -
-
-The result is put under dist/ and is ready to be included in my Spring/Thymeleaf-Application as it is.
-
-As I already wrote in a previous article, frontend-development is mostly done with Nodejs and Grunt nowadays. -As I am planning to base the frontend of my next Spring-application on Bootstrap, I was looking for a way to integrate my backend, which is built using Spring and Thymeleaf and managed with Maven, with a frontend, which is based on Bootstrap and, hence, built with Nodejs and Grunt. -
--As I found out, one can integrate an npm-based build into a Maven-project with the help of the frontend-maven-plugin. -This plugin automates the management of Nodejs and its libraries and ensures that the version of Node and NPM being run is the same in every build-environment. -As a backend-developer, you do not have to install any of the frontend-tools manually. -Because of that, this plugin is ideal to integrate a separately developed frontend into a maven-build, without bothering the backend-developers with details of the frontend-build-process. -
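--A hedged sketch of such a plugin-configuration could look like the following (the goal-names and parameters are taken from the documentation of the frontend-maven-plugin; the version-numbers are placeholders, that you have to fill in): -
-
-<plugin>
-  <groupId>com.github.eirslett</groupId>
-  <artifactId>frontend-maven-plugin</artifactId>
-  <version><!-- pick a current version --></version>
-  <configuration>
-    <!-- run the npm-/grunt-build in a sub-directory of the project -->
-    <workingDirectory>src/main/frontend</workingDirectory>
-  </configuration>
-  <executions>
-    <execution>
-      <id>install-node-and-npm</id>
-      <goals><goal>install-node-and-npm</goal></goals>
-      <configuration>
-        <!-- pin the node-version, so that every build-environment uses the same one -->
-        <nodeVersion><!-- e.g. a current LTS-version --></nodeVersion>
-      </configuration>
-    </execution>
-    <execution>
-      <id>npm-install</id>
-      <goals><goal>npm</goal></goals>
-    </execution>
-    <execution>
-      <id>grunt-build</id>
-      <goals><goal>grunt</goal></goals>
-    </execution>
-  </executions>
-</plugin>
-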
-
-The drawback of this approach is that the backend- and the frontend-project are tightly coupled.
-You can configure the frontend-maven-plugin to use a separate subdirectory as working-directory (for example src/main/frontend) and utilize this to separate the frontend-project into its own repository (for example by using the submodule-functions of git).
-But the grunt-tasks that you call in the frontend-project through the frontend-maven-plugin must be defined in that project.
-
--Since I am planning to integrate a ‐ slightly modified ‐ version of Bootstrap as frontend into my project, that would mean that I have to mess around with the configuration of the Bootstrap-project a lot. -But that is not a very good idea, because it hinders upgrades of the Bootstrap-base, as merge-conflicts become more and more likely. -
-
-So, I decided to program a special Gruntfile.js that resides in the base-folder of my Maven-project and lets me redefine and call tasks of a separated frontend-project in a subdirectory.
-
--As it turned out, there are several npm-plugins for managing and building sub-projects (like grunt-subgrunt or grunt-recurse) or for including existing Gruntfiles from sub-projects (like grunt-load-gruntfile), but none of them lets you redefine tasks of the sub-project before calling them. -
--I programmed a simple Gruntfile that lets you do exactly this: -
-
-
-module.exports = function(grunt) {
-
- grunt.loadNpmTasks('grunt-newer');
-
- grunt.registerTask('frontend','Build HTML & CSS for Frontend', function() {
- var
- done = this.async(),
- path = './src/main/frontend';
-
- grunt.util.spawn({
- cmd: 'npm',
- args: ['install'],
- opts: { cwd: path, stdio: 'inherit' }
- }, function (err, result, code) {
- if (err || code > 0) {
- grunt.fail.warn('Failed installing node modules in "' + path + '".');
- }
- else {
- grunt.log.ok('Installed node modules in "' + path + '".');
- }
-
- process.chdir(path);
- require(path + '/Gruntfile.js')(grunt);
- grunt.task.run('newer:copy');
- grunt.task.run('newer:less');
- grunt.task.run('newer:svgstore');
-
- done();
- });
- });
-
-
- grunt.registerTask('default', [ 'frontend' ]);
-
-};
-
-
-This Gruntfile loads the npm-task grunt-newer.
-Then, it registers a grunt-task called frontend, that loads the dependencies of the specified sub-project, reads in its Gruntfile and runs redefined versions of the tasks copy, less and svgstore, which are defined in the sub-project.
-The sub-project does not register grunt-newer itself.
-This is done in this parent-project, to demonstrate how to register additional grunt-plugins and redefine tasks of the sub-project without touching it at all.
-
-The separated frontend-project can be used by the frontend-team to develop the templates needed by the backend-developers, without any knowledge of the Maven-project. -The frontend-project is then included into the backend, which is managed by Maven, and can be used by the backend-developers without the need to know anything about the techniques that were used to develop the templates. -
--The whole example can be browsed at juplo.de/gitweb or cloned with: -
-
-
-git clone http://juplo.de/git/examples/maven-grunt-integration
-
-
-Be sure to check out the tag 2.0.0 for the corresponding version after cloning, in case I add more commits to demonstrate other stuff.
-Also, you have to init and update the submodule after the checkout:
-
-
-git submodule init
-git submodule update
-
-
-If you run mvn jetty:run, you will notice that the frontend-maven-plugin automatically downloads Nodejs into the folder node of the parent-project.
-Afterwards, the dependencies of the parent-project are downloaded into the folder node_modules of the parent-project, the dependencies of the sub-project are downloaded into the folder src/main/frontend/node_modules and the sub-project is built automatically into the folder src/main/frontend/dist, which is included into the directory-tree that is served by the jetty-maven-plugin.
-
-The sub-project is fully usable standalone, to drive the development of the frontend separately. -You can read more about it in this previous article. -
--In this article, I showed how to integrate a separately developed frontend-project into a backend-project managed by Maven. -This enables you to separate the development of the layout and the logic of a classic ROCA-project almost completely. -
java.lang.Exception: Method XYZ should have no parameters
-
--Here is the quick and easy fix for it: -Fix the ordering of the dependencies in your pom.xml. -The dependency for JMockit has to come first! -
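--A sketch of the required ordering (the version-numbers are placeholders and, depending on the JMockit-version you use, the groupId may be org.jmockit or com.googlecode.jmockit ‐ only the order matters here): -
-
-<dependency>
-  <groupId>org.jmockit</groupId>
-  <artifactId>jmockit</artifactId>
-  <version><!-- your version --></version>
-  <scope>test</scope>
-</dependency>
-<!-- JUnit has to be declared after JMockit! -->
-<dependency>
-  <groupId>junit</groupId>
-  <artifactId>junit</artifactId>
-  <version><!-- your version --></version>
-  <scope>test</scope>
-</dependency>
-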
- This article was published in the course of a research project that is funded by the European Union and the federal state North Rhine-Westphalia. -
-
-
-
-
-
-
spring-boot:run
-
-A lot of people seem to have problems with the hot-reloading of static HTML-resources when developing a Spring-Boot application that uses Thymeleaf as templating engine with spring-boot:run.
-There are a lot of tips out there on how to fix that problem:
-
-Set spring.thymeleaf.cache=false in your application-configuration in src/main/resources/application.properties.
-Set spring.template.cache=false and spring.thymeleaf.cache=false and/or run the application in debugging mode.
-Add org.springframework:springloaded to the configuration of the spring-boot-maven-plugin.
--But none of these fixes worked for me. -Some might work, if I switched my IDE (I am using Netbeans), but I have not tested that, because I am not willing to give up my beloved IDE because of that issue. -
-src/main/webapp
-Fortunately, I found a simple solution to fix the issue without all the above stuff.
-You simply have to move your Thymeleaf-templates back to where they belong (IMHO): src/main/webapp, and turn off the caching.
-It is not necessary to run the application in debugging mode and/or from your IDE, nor is it necessary to add the dependency to springloaded or more configuration-switches.
-
-To move the templates and disable caching, just add the following to your application-configuration in src/main/resources/application.properties:
-
spring.thymeleaf.prefix=/thymeleaf/
-spring.thymeleaf.cache=false
-
-
-Of course, you also have to move your Thymeleaf-templates from src/main/resources/templates/ to src/main/webapp/thymeleaf/.
-In my opinion, the templates belong there anyway, in order to have them accessible as normal static HTML(5)-files.
-If they are locked away in the classpath, you cannot access them directly, which foils the approach of Thymeleaf, that you can view your templates in a browser as they are.
-
- This article was published in the course of a research project that is funded by the European Union and the federal state North Rhine-Westphalia. -
-
-
-
-
-
-
-In its default configuration Jackson adjusts the time-zone of a ZonedDateTime to the time-zone of the local context.
-As, by default, the time-zone of the local context is not set and has to be configured manually, Jackson adjusts the time-zone to GMT.
-
-This behavior is very unintuitive and not well documented.
-It looks like Jackson just loses the time-zone during deserialization and, if you serialize and deserialize a ZonedDateTime, the result will not equal the original instance, because it has a different time-zone.
-
--Fortunately, there is a quick and simple fix for this odd default-behavior: you just have to tell Jackson not to adjust the time-zone. -This can be done with this line of code: -
-mapper.disable(DeserializationFeature.ADJUST_DATES_TO_CONTEXT_TIME_ZONE);
-
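--To see the effect in isolation, here is a minimal, hedged sketch of a round-trip (the variable-names are mine; it assumes, that the JavaTimeModule from the artifact jackson-datatype-jsr310 is on the classpath): -
-
-ObjectMapper mapper = new ObjectMapper();
-mapper.registerModule(new JavaTimeModule());
-// Write ISO-8601 strings instead of numeric timestamps, so that the
-// offset survives the serialization at all.
-mapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
-// The actual fix: do not shift deserialized dates to the context time-zone.
-mapper.disable(DeserializationFeature.ADJUST_DATES_TO_CONTEXT_TIME_ZONE);
-
-ZonedDateTime original =
-    ZonedDateTime.of(2016, 6, 15, 12, 0, 0, 0, ZoneId.of("Europe/Berlin"));
-String json = mapper.writeValueAsString(original);
-ZonedDateTime restored = mapper.readValue(json, ZonedDateTime.class);
-// The offset +02:00 is preserved; without the fix, restored would
-// have been shifted to GMT.
-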
-
-
- This article was published in the course of a research project that is funded by the European Union and the federal state North Rhine-Westphalia. -
-
-
-
-
-
-
-The goal of this series is not to show how simple it is to set up your first social app with Spring Social. -Even though the usual getting-started guides, like the one this series is based on, are really simple at first glance, they IMHO tend to be confusing, if you try to move on. -I started with the example from the original Getting-Started guide "Accessing Facebook Data" and planned to extend it to handle a sign-in via the canvas-page of Facebook, like in the Spring Social Canvas-Example. -But I was not able to achieve that simple refinement and ran into multiple obstacles. -
--Because of that, I wanted to show the refinement-process from a simple example up to a full-fledged facebook-app. -My goal is that you should be able to reuse the final result of the last part of this series as a blueprint and starting-point for your own project. -At the same time, you should be able to jump back to earlier posts and read all about the design-decisions that led up to that result. -
--This part of my series will handle the preconditions for our first real development-steps. -
--The source-code can be found on http://juplo.de/git/examples/facebook-app/ -and browsed via gitweb. -For every part I will add a corresponding tag, that denotes the differences between the earlier and the later development-steps. -
--We will start with the simplest app possible, that just displays the public profile-data of the logged-in user. -This app is based on the code of the original Getting-Started guide "Accessing Facebook Data" from Spring-Social. -
-
-But it is simplified and cleaned up a little.
-And I fixed some small bugs: the original code from
-https://github.com/spring-guides/gs-accessing-facebook.git
- produces a
-NullPointerException and won't work with the current version 2.0.3.RELEASE of spring-social-facebook, because it uses the deprecated scope read_stream.
-
-The code for this part is tagged with part-00.
-Apart from the HTML-templates, the boilerplate for spring-boot and the build-definitions in the pom.xml, it mainly consists of one file:
-
@Controller
-@RequestMapping("/")
-public class HomeController
-{
- private final static Logger LOG = LoggerFactory.getLogger(HomeController.class);
-
-
- private final Facebook facebook;
-
-
- @Inject
- public HomeController(Facebook facebook)
- {
- this.facebook = facebook;
- }
-
-
- @RequestMapping(method = RequestMethod.GET)
- public String helloFacebook(Model model)
- {
- boolean authorized = true;
- try
- {
- authorized = facebook.isAuthorized();
- }
- catch (NullPointerException e)
- {
- LOG.debug("NPE while acessing Facebook: {}", e);
- authorized = false;
- }
- if (!authorized)
- {
- LOG.info("no authorized user, redirecting to /connect/facebook");
- return "redirect:/connect/facebook";
- }
-
- User user = facebook.userOperations().getUserProfile();
- LOG.info("authorized user {}, id: {}", user.getName(), user.getId());
- model.addAttribute("user", user);
- return "home";
- }
-}
-
--I removed every unnecessary bit, to clear the view for the relevant part. -You can add your styling and stuff by yourself later... -
--The magic of Spring-Social is hidden in the autoconfiguration of Spring-Boot, which will be revealed and refined/replaced in the next parts of this series. -
--You can clone the repository, checkout the right version and run it with the following commands: -
-git clone http://juplo.de/git/examples/facebook-app/
-cd facebook-app
-git checkout part-00
-mvn spring-boot:run \
- -Dfacebook.app.id=YOUR_ID \
- -Dfacebook.app.secret=YOUR_SECRET
-
-
-Of course, you have to replace YOUR_ID and YOUR_SECRET with the ID and secret of your Facebook-App.
-What you have to do to register as a facebook-developer and start your first facebook-app is described in this "Getting Started"-guide from Spring-Social.
-
-In addition to what is described there, you have to configure the URL of your website.
-To do so, you have to navigate to the Settings-panel of your newly registered facebook-app.
-Click on Add Platform and choose Website.
-Then, enter http://localhost:8080/ as the URL of your website.
-
-After maven has downloaded all dependencies and started the Spring-Boot application in the embedded tomcat, you can point your browser to http://localhost:8080, connect, go back to the welcome-page and view the public data of the account you connected with your app. -
--Now, you are prepared to learn Spring-Social and develop your first app step by step. -I will guide you through the process in the upcoming parts of this series. -
--In the next part of this series I will explain why this example from the "Getting Started"-guide would not work as a real application and what has to be done to fix that. -
- This article was published in the course of a research project that is funded by the European Union and the federal state North Rhine-Westphalia. -
-
-
-
-
-
-
--In the last and first part of this series, I prepared you for our little course. -
--In this part we will take a look behind the scenes and learn more about the autoconfiguration performed by Spring-Boot, which made our first small example work so automagically. -
-
-You can find the source-code on http://juplo.de/git/examples/facebook-app/
-and browse it via gitweb.
-Check out part-01 to get the source for this part of the series.
-
--While looking at our simple example from the last part of this series, you may have wondered how all this is wired up. -You can log in a user from Facebook and access his public profile, all without a single line of configuration. -
-- -This is achieved via Spring-Boot autoconfiguration. - -
--What comes in very handy in the beginning sometimes gets in your way, when your project grows. -This may happen, because these parts of the code are not under your control and you do not know what the autoconfiguration is doing on your behalf. -Because of that, in this part of our series, we will rebuild the most relevant parts of the configuration by hand. -As you will see later, this is not only an exercise, but will lead us to the first improvement of our little example. -
--In our case, two Spring-Boot configuration-classes are defining the configuration. -These two classes are SocialWebAutoConfiguration and FacebookAutoConfiguration. -Both classes are located in the module spring-boot-autoconfigure. -
-
-The first one configures the ConnectController, sets up an instance of InMemoryUsersConnectionRepository as persistent store for user/connection-mappings and sets up a UserIdSource on our behalf, that always returns the user-id anonymous.
-
-The second one adds an instance of FacebookConnectionFactory to the list of available connection-factories, if the required properties (spring.social.facebook.appId and spring.social.facebook.appSecret) are available.
-It also configures a request-scoped bean of the type Facebook, that is created for each request that has a known user, who is connected to the Graph-API.
-
-The following class rebuilds the same configuration explicitly: -
-@Configuration
-@EnableSocial
-public class SocialConfig extends SocialConfigurerAdapter
-{
- /**
- * Add a {@link FacebookConnectionFactory} to the configuration.
- * The factory is configured through the keys <code>facebook.app.id</code>
- * and <code>facebook.app.secret</code>.
- *
- * @param config
- * @param env
- */
- @Override
- public void addConnectionFactories(
- ConnectionFactoryConfigurer config,
- Environment env
- )
- {
- config.addConnectionFactory(
- new FacebookConnectionFactory(
- env.getProperty("facebook.app.id"),
- env.getProperty("facebook.app.secret")
- )
- );
- }
-
- /**
- * Configure an instance of {@link InMemoryUsersConnectionRepository} as persistent
- * store of user/connection-mappings.
- *
- * At the moment, no special configuration is needed.
- *
- * @param connectionFactoryLocator
- * The {@link ConnectionFactoryLocator} will be injected by Spring.
- * @return
- * The configured {@link UsersConnectionRepository}.
- */
- @Override
- public UsersConnectionRepository getUsersConnectionRepository(
- ConnectionFactoryLocator connectionFactoryLocator
- )
- {
- InMemoryUsersConnectionRepository repository =
- new InMemoryUsersConnectionRepository(connectionFactoryLocator);
- return repository;
- }
-
- /**
- * Configure a {@link UserIdSource}, that is equivalent to the one, that is
- * created by Spring-Boot.
- *
- * @return
- * An instance of {@link AnonymousUserIdSource}.
- *
- * @see {@link AnonymousUserIdSource}
- */
- @Override
- public UserIdSource getUserIdSource()
- {
- return new AnonymousUserIdSource();
- }
-
-
- /**
- * Configuration of the controller, that handles the authorization against
- * the Facebook-API, to connect a user to Facebook.
- *
- * At the moment, no special configuration is needed.
- *
- * @param factoryLocator
- * The {@link ConnectionFactoryLocator} will be injected by Spring.
- * @param repository
- * The {@link ConnectionRepository} will be injected by Spring.
- * @return
- * The configured controller.
- */
- @Bean
- public ConnectController connectController(
- ConnectionFactoryLocator factoryLocator,
- ConnectionRepository repository
- )
- {
- ConnectController controller =
- new ConnectController(factoryLocator, repository);
- return controller;
- }
-
- /**
- * Configure a scoped bean named <code>facebook</code>, that enables
- * access to the Graph-API in the name of the current user.
- *
- * @param repository
- * The {@link ConnectionRepository} will be injected by Spring.
- * @return
- * A {@link Connection}, that represents the authorization of the
- * current user against the Graph-API, or null, if the
- * current user is not connected to the API.
- */
- @Bean
- @Scope(value = "request", proxyMode = ScopedProxyMode.INTERFACES)
- public Facebook facebook(ConnectionRepository repository)
- {
- Connection<Facebook> connection =
- repository.findPrimaryConnection(Facebook.class);
- return connection != null ? connection.getApi() : null;
- }
-}
-
--If you run this refined version of our app, you will see that it behaves in exactly the same way as the initial version. -
--You may ask why we should rebuild the configuration by hand, if it does the same thing. -This is because the example, so far, would not work as a real app. -The first step to refine it is to take control of the configuration. -
--In the next part of this series, I will show you why this is necessary. -But first, we have to take a short look into Spring Social. -
- This article was published in the course of a research project that is funded by the European Union and the federal state North Rhine-Westphalia. -
-
-
-
-
-
-
-In the last part of this series, we took control of the autoconfiguration, that Spring Boot had put in place for us. -But there is still a lot of magic in our little example, that was borrowed from the official "Getting Started"-guides. Or at least, it looks like it. -
--When I first ran the example, I wondered: "Wow, how does this little piece of code figure out which data to fetch? How is Spring Social told, which data to fetch? That must be stored in the session, or so! But where is that configured?" and so on and so on. -
--When we connect to Facebook, Facebook tells Spring Social which user is logged in and whether this user authorizes the requested access. -We get an access-token from Facebook, that can be used to retrieve user-related data from the Graph-API. -Our application has to manage this data. -
--Spring Social assists us with that task. -But in the end, we have to make the decisions on how to deal with it. -
-
-Spring Social provides the concept of a ConnectionRepository, which is used to persist the connections of a specific user.
-Spring Social also provides the concept of a UsersConnectionRepository, which stores, whether a user is connected to a specific social service or not.
-As described in the official documentation, Spring Social uses the UsersConnectionRepository to create a request-scoped ConnectionRepository bean (which backs the bean named facebook in our little example), that is used by us to access the Graph-API.
-
-But to be able to do so, it must know which user we are interested in! -
-
-Hence, Spring Social requires us to configure a UserIdSource.
-Every time it prepares a request for us, Spring Social will ask this source which user we are interested in.
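--Conceptually, what happens on each request can be sketched like this (a hedged illustration with my own variable-names, not actual framework-code): -
-
-// the configured UserIdSource names the current user...
-String userId = userIdSource.getUserId();
-// ...the UsersConnectionRepository narrows that down to a
-// per-user ConnectionRepository...
-ConnectionRepository repository =
-    usersConnectionRepository.createConnectionRepository(userId);
-// ...from which the request-scoped API-binding is derived
-Connection<Facebook> connection =
-    repository.findPrimaryConnection(Facebook.class);
-Facebook facebook = connection != null ? connection.getApi() : null;
-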
-
-Attentive readers might have noticed that we have configured such a source, when we were explicitly rebuilding the automatic default-configuration of Spring Boot: -
-public class AnonymousUserIdSource implements UserIdSource
-{
- @Override
- public String getUserId()
- {
- return "anonymous";
- }
-}
-
-
-But what is that?!?
-All the time we are only interested in one and the same user, whose connections should be stored under the key anonymous?
-
-And what will happen, if a second user connects to our app? -
--To see what happens, if more than one user connects to your app, you have to create a test user. -This is very simple. -Just go to the dashboard of your app, select the menu-item "Roles" and click on the tab "Test Users". -Select a test user (or create a new one) and click on the "Edit"-button. -There you can select "Log in as this test user". -
--If you first connect to the app as yourself and afterwards as the test user, you will see that your data is presented to the test user. -
-
-That is because we are telling Spring Social that every user is called anonymous.
-Hence, every user is the same for Spring Social!
-When the test user fetches the page after you have connected to Facebook as yourself, Spring Social thinks that the same user is returning and serves your data.
-
-In the next part of this series, we will try to teach Spring Social to distinguish between several users. -
- This article was published in the course of a research project that is funded by the European Union and the federal state North Rhine-Westphalia. -
-
-
-
-
-
-
--In the last part of this series, I explained why the nice little example from the Getting-Started guide "Accessing Facebook Data" cannot function as a real facebook-app. -
-
-In this part, we will try to solve that problem by implementing a UserIdSource, that tells Spring Social, which user it should connect to the API.
-
-You can find the source-code on http://juplo.de/git/examples/facebook-app/
-and browse it via gitweb.
-Check out part-03 to get the source for this part of the series.
-
UserIdSource
-The UserIdSource is used by Spring Social to ask us, which user it should connect with the social net.
-Clearly, to answer that question, we must remember which user we are currently interested in!
-
--In order to remember the current user, we implement a simple mechanism, that stores the ID of the current user in a cookie and retrieves it from there for subsequent calls. -This concept was borrowed — again — from the official code examples. -You can find it for example in the quickstart-example. -
--It is crucial to stress that this concept is inherently insecure and should never be used in a production-environment. -As the ID of the user is stored in a cookie, an attacker could simply take over control by sending the ID of any currently connected user he is interested in. -
--The concept is implemented here only for educational purposes. -It will be replaced by Spring Security later on. -But for the beginning, it is easier to understand how Spring Social works, if we implement a simple version of the mechanism ourselves. -
--The internals of our implementation are not of interest. -You may explore them by yourself. -In short, it stores the ID of each new user in a cookie. -By inspecting that cookie, it can restore the ID of the user on subsequent calls. -
--What is of interest here is how we can plug this simple example-mechanism into Spring Social. -
--Mainly, there are two hooks to do that ‐ that is, two interfaces we have to implement: a ConnectionSignUp and a UserIdSource. -
-
-The implementation of ConnectionSignUp simply uses the ID that is provided by the social network.
-Since we are only signing in users from Facebook, these ID's are guaranteed to be unique.
-
public class ProviderUserIdConnectionSignUp implements ConnectionSignUp
-{
- @Override
- public String execute(Connection<?> connection)
- {
- return connection.getKey().getProviderUserId();
- }
-}
-
-
-The implementation of UserIdSource retrieves the ID, that was stored in the SecurityContext (our simple implementation — not to be confused with the class from Spring Security).
-If no user is stored in the SecurityContext, it falls back to the old behavior and returns the fixed id anonymous.
-
public class SecurityContextUserIdSource implements UserIdSource
-{
- private final static Logger LOG =
- LoggerFactory.getLogger(SecurityContextUserIdSource.class);
-
-
- @Override
- public String getUserId()
- {
- String user = SecurityContext.getCurrentUser();
- if (user != null)
- {
- LOG.debug("found user \"{}\" in the security-context", user);
- }
- else
- {
- LOG.info("found no user in the security-context, using \"anonymous\"");
- user = "anonymous";
- }
- return user;
- }
-}
-
-
-To replace the AnonymousUserIdSource with our new implementation, we simply instantiate the new class instead of the old one in our configuration-class SocialConfig:
-
@Override
-public UserIdSource getUserIdSource()
-{
- return new SecurityContextUserIdSource();
-}
-
-
-There are several ways to plug in the ConnectionSignUp.
-I decided to plug it into the instance of InMemoryUsersConnectionRepository that our configuration uses, because this way, a user will be signed up automatically on sign-in, if he is not yet known to the application:
-
@Override
-public UsersConnectionRepository getUsersConnectionRepository(
- ConnectionFactoryLocator connectionFactoryLocator
- )
-{
- InMemoryUsersConnectionRepository repository =
- new InMemoryUsersConnectionRepository(connectionFactoryLocator);
- repository.setConnectionSignUp(new ProviderUserIdConnectionSignUp());
- return repository;
-}
-
--This makes sense, because our facebook-app uses Facebook to sign in its users and, because of that, does not have its own user-model. -It can just reuse the user-data provided by Facebook. -
--The other approach would be to officially sign up users that are not known to the app. -This is achieved by redirecting to a special URL, if a sign-in fails because the user is unknown. -This URL then presents a form for the sign-up, which can be prepopulated with the user-data provided by the social network. -You can read more about this approach in the official documentation. -
--So, let us see if our refinement works. Run the following commands and log into your app with at least two different users: -
-git clone http://juplo.de/git/examples/facebook-app/
-cd facebook-app
-git checkout part-03
-mvn spring-boot:run \
- -Dfacebook.app.id=YOUR_ID \
- -Dfacebook.app.secret=YOUR_SECRET \
- -Dlogging.level.de.juplo.yourshouter=debug
-
-
-(The last part of the command turns on the DEBUG logging-level, to see in detail what is going on.)
-
--Unfortunately, our application shows exactly the same behavior as before our last refinement. -Why is that? -
-
-If you run the application in a debugger and put a breakpoint in our implementation of ConnectionSignUp, you will see that this code is never called.
-But it is plugged in at the right place and should be called, if a new user signs in!
-
-The solution is that we are using the wrong mechanism.
-We are still using the ConnectController, which was configured in the simple example we extended.
-But this controller is meant to connect a known user to one or more new social services.
-This controller assumes that the user is already signed in to the application and can be retrieved via the configured UserIdSource.
-
-
-To sign in a user to our application, we have to use the ProviderSignInController instead!
-
-
-In the next part of this series, I will show you how to change the configuration, so that the ProviderSignInController is used to sign in (and automatically sign up) users, that were authenticated through the Graph-API of Facebook.
-
- This article was published in the course of a research project that is funded by the European Union and the federal state North Rhine-Westphalia. -
-
-
-
-
-
-
--In the last part of this series, we tried to teach Spring Social how to remember our signed-in users and learned that we have to sign in a user first. -
--In this part, I will show you how to sign in (and automatically sign up) users, that are authenticated via the Graph-API. -
-
-You can find the source-code on http://juplo.de/git/examples/facebook-app/
-and browse it via gitweb.
-Check out part-04 to get the source for this part of the series.
-
--In the last part of our series we ran into the problem, that we wanted to connect several (new) users to our application. -We tried to achieve that by extending our initial configuration. -But the mistake was, that we tried to connect new users. -In the world of Spring Social we can only connect a known user to a new social service. -
-
-To know a user, Spring Social requires us to sign in that user.
-But again, if you try to sign in a new user, Spring Social requires us to sign up that user first.
-Because of that, we had already implemented a ConnectionSignUp and configured Spring Social to call it, whenever it does not know a user that was authenticated by Facebook.
-If you forget that (or if you remove the according configuration, that tells Spring Social to use our ConnectionSignUp), Spring Social will redirect you to the URL /signup — a sign-up page you have to implement — after a successful authentication of a user, that Spring Social does not know yet.
-
-The confusion — or, to be honest, my confusion — about sign-in and sign-up arises from the fact, that we are developing a Facebook-application. -We do not care about signing up users. -Each user that is known to Facebook — that is, who has signed up to Facebook — should be able to use our application. -An explicit sign-up to our application is not needed and not wanted. -So, in our use-case, we have to implement the automatic sign-up of new users. -But Spring Social is designed for a much wider range of use-cases. -Hence, it has to distinguish between sign-in and sign-up. -
-
-Spring Social provides the interface SignInAdapter, that it calls every time it has authenticated a user against a social service.
-This enables us to be aware of that event and remember the user for subsequent calls.
-Our implementation stores the user in our SecurityContext to sign him in and creates a cookie to remember him for subsequent calls:
-
public class UserCookieSignInAdapter implements SignInAdapter
-{
- private final static Logger LOG =
- LoggerFactory.getLogger(UserCookieSignInAdapter.class);
-
-
- @Override
- public String signIn(
- String user,
- Connection<?> connection,
- NativeWebRequest request
- )
- {
- LOG.info(
- "signing in user {} (connected via {})",
- user,
- connection.getKey().getProviderId()
- );
- SecurityContext.setCurrentUser(user);
- UserCookieGenerator
- .INSTANCE
- .addCookie(user, request.getNativeResponse(HttpServletResponse.class));
-
- return null;
- }
-}
-
-
-It returns null to indicate, that the user should be redirected to the default-URL after a successful sign-in.
-This URL can be configured in the ProviderSignInController and defaults to /, which matches our use-case.
-If you returned a string here, for example /welcome.html, the controller would ignore the configured URL and redirect to that URL after a successful sign-in.
-
-To enable the Sign-In, we have to plug our SignInAdapter into the ProviderSignInController:
-
@Bean
-public ProviderSignInController signInController(
- ConnectionFactoryLocator factoryLocator,
- UsersConnectionRepository repository
- )
-{
- ProviderSignInController controller = new ProviderSignInController(
- factoryLocator,
- repository,
- new UserCookieSignInAdapter()
- );
- return controller;
-}
-
-
-Since we are using Spring Boot, an alternative configuration would have been to just create a bean-instance of our implementation named signInAdapter.
-Then, the auto-configuration of Spring Boot would discover that bean, create an instance of ProviderSignInController and plug in our implementation for us.
-If you want to learn how that works, take a look at the implementation of the auto-configuration in the class SocialWebAutoConfiguration, lines 112ff.
-
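--A hedged sketch of that alternative wiring (it assumes, that the social auto-configuration of Spring Boot is active and has not been replaced, like we did in an earlier part): -
-
- /**
-  * Expose our SignInAdapter as a bean, so that the auto-configuration
-  * of Spring Boot picks it up and creates the ProviderSignInController
-  * with it on our behalf.
-  */
- @Bean
- public SignInAdapter signInAdapter()
- {
-   return new UserCookieSignInAdapter();
- }
-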
--If you run our refined example and visit it after impersonating different facebook-users, you will see that everything works as expected now. -If you visit the app for the first time (after a restart) with a new user, the user is signed up and signed in automatically, and a cookie is generated, that stores the Facebook-ID of the user in the browser. -On subsequent calls, his ID is read from this cookie and the corresponding connection is restored from the persistent store by Spring Social. -
-
-In the next part of this little series, we will move the redirect-if-unknown logic from our HomeController into our UserCookieInterceptor, so that the behavior of our so-called "security"-concept more closely resembles the behavior of Spring Security.
-That will ease the migration to that solution in a later step.
-
--Perhaps you want to skip that rather short and boring step and jump to the part after the next, which explains how to sign in users via the signed_request, that Facebook sends, if you integrate your app as a canvas-page. -
-
- This article was published in the course of a research project that is funded by the European Union and the federal state North Rhine-Westphalia. -
-
-
-
-
-
-
--In the last part of this series, we reconfigured our app, so that users are signed in after an authentication against Facebook and new users are signed up automatically on their first visit. -
--In this part, we will refactor our redirect-logic for unauthenticated users, so that it more closely resembles the behavior of Spring Security, hence easing the planned switch to that technology in a future step. -
-
-You can find the source-code on http://juplo.de/git/examples/facebook-app/
-and browse it via gitweb.
-Check out part-05 to get the source for this part of the series.
-
--To stress that again: our simple authentication-concept is only meant for educational purposes. It is inherently insecure! -We are not refining it here to make it better or more secure. -We are refining it, so that it can be replaced with Spring Security later on, without a hassle! -
-
-In our current implementation, a user, who is not yet authenticated, would be redirected to our sign-in-page only if he visits the root of our webapp (/).
-To move all redirect-logic out of the HomeController and redirect unauthenticated users from all pages to our sign-in-page, we can simply modify our interceptor UserCookieInterceptor, which already intercepts each and every request.
-
-We refine the method preHandle, so that it redirects every request that is not authenticated to our sign-in-page:
-
@Override
-public boolean preHandle(
- HttpServletRequest request,
- HttpServletResponse response,
- Object handler
- )
- throws
- Exception
-{
- if (request.getServletPath().startsWith("/signin"))
- return true;
-
- String user = UserCookieGenerator.INSTANCE.readCookieValue(request);
- if (user != null)
- {
- if (!repository
- .findUserIdsConnectedTo("facebook", Collections.singleton(user))
- .isEmpty()
- )
- {
- LOG.info("loading user {} from cookie", user);
- SecurityContext.setCurrentUser(user);
- return true;
- }
- else
- {
- LOG.warn("user {} is not known!", user);
- UserCookieGenerator.INSTANCE.removeCookie(response);
- }
- }
-
- response.sendRedirect("/signin.html");
- return false;
-}
-
-
-If the user, that is identified by the cookie, is not known to Spring Social, we send a redirect to our sign-in-page and flag the request as already handled, by returning false.
-To prevent an endless loop of redirections, we must not redirect requests, that were already redirected to our sign-in-page.
-Since these requests hit our webapp as new requests for the different location, we can filter them out and wave them through at the beginning of the method.
-
-That is all there is to do.
-Run the app and call the page http://localhost:8080/profile.html as the first request.
-You will see that you will be redirected to our sign-in-page.
-
-As it is now not possible to call any page except the sign-in-page without being redirected to our sign-in-page, if you are not authenticated, it is impossible to call any page without being authenticated.
-Hence, we can (and should!) refine our UserIdSource to throw an exception, if that happens anyway, because it has to be a sign of a bug:
-
public class SecurityContextUserIdSource implements UserIdSource
-{
-
- @Override
- public String getUserId()
- {
- Assert.state(SecurityContext.userSignedIn(), "No user signed in!");
- return SecurityContext.getCurrentUser();
- }
-}
-
--In the next part of this series, we will enable users to sign in through the canvas-page of our app. -The canvas-page is the page that Facebook embeds into its webpage, if we render our app inside of Facebook. -
- This article was published in the course of a research project that is funded by the European Union and the federal state North Rhine-Westphalia. -
-
-
-
-
-
-
--In the last part of this series, we refactored our authentication-concept, so that it can be replaced by Spring Security more easily later on. -
-
-In this part, we will turn our app into a real Facebook-App, that is rendered inside Facebook and signs in users through the signed_request.
-
-You can find the source-code on http://juplo.de/git/examples/facebook-app/
-and browse it via gitweb.
-Check out part-06 to get the source for this part of the series.
-
signed_request
-If you add the platform Facebook Canvas to your app, you can present your app inside of Facebook.
-It will be accessible on a URL like https://apps.facebook.com/YOUR_NAMESPACE then, and if a (known!) user accesses this URL, Facebook will send a signed_request, that already contains some data of this user and an authorization to retrieve more.
-
signed_request In 5 Simple Steps-As I first tried to extend the simple example this article-series is based on, I stumbled across multiple misunderstandings. -But now, as I have guided you around all these obstacles, it is fairly easy to refine our app, so that it can sign in users through the signed_request, that is sent to the canvas-page. -
--You just have to: -
-add the platform "Facebook Canvas" to your app,
-serve your app via HTTPS,
-adjust the URL of the platform "Website",
-plug in the CanvasSignInController and
-allow unauthenticated access to the canvas-path.
--That is all there is to do. -But now, step by step... -
--Go to the settings-panel of your app on https://developers.facebook.com/apps and click on Add Platform. -Choose Facebook Canvas. -Pick a secure URL, where your app will serve the canvas-page. -
-
-For example: https://localhost:8443.
-
-Be aware, that the URL has to be publicly available, if you want to enable other users to access your app.
-But that also applies to the Website-URL http://localhost:8080, that we are already using.
-
-Just remember: if other people should be able to access your app later, you have to change these URL's to something they can access, because all the content of your app is served by you, not by Facebook. -A Canvas-App just embeds your content in an iframe inside of Facebook. -
-
-Add the following lines to your src/main/resources/application.properties:
-
server.port: 8443
-server.ssl.key-store: keystore
-server.ssl.key-store-password: secret
-
-
-I have included a self-signed keystore with the password secret in the source, that you can use for development and testing.
-But of course, later, you have to create your own keystore with a certificate that is signed by an official certificate authority, that is known to the browsers of your users.
-
-Since your app now listens on 8443 and uses HTTPS, you have to change the URL, that is used for the platform "Website", if you want your sign-in-page to continue to work in parallel to the sign-in through the canvas-page.
-
-For now, you can simply change it to https://localhost:8443/ in the settings-panel of your app.
-
CanvasSignInController
-To actually enable the automatic handling of the signed_request ‐ that is, decoding the signed_request and signing in the user with the data provided in it ‐ you just have to add the CanvasSignInController as a bean in your SocialConfig:
-
@Bean
-public CanvasSignInController canvasSignInController(
- ConnectionFactoryLocator connectionFactoryLocator,
- UsersConnectionRepository usersConnectionRepository,
- Environment env
- )
-{
- return
- new CanvasSignInController(
- connectionFactoryLocator,
- usersConnectionRepository,
- new UserCookieSignInAdapter(),
- env.getProperty("facebook.app.id"),
- env.getProperty("facebook.app.secret"),
- env.getProperty("facebook.app.canvas")
- );
-}
-
-
-Since we have "secured" all of our pages except of our sign-in-page /signin*, so that they can only be accessed by an authenticated user, we have to explicitly allow unauthenticated access to our new special sign-in-page.
-
-To achieve that, we have to refine our UserCookieInterceptor as follows.
-First add a pattern for all pages, that are allowed to be accessed unauthenticated:
-
private final static Pattern PATTERN = Pattern.compile("^/signin|canvas");
-
-
-Then match the requests against this pattern, instead of the fixed string /signin:
-
if (PATTERN.matcher(request.getServletPath()).find())
- return true;
-
-
-Facebook always sends a signed_request to your app, if a user visits your app through the canvas-page.
-But on the first visit of a user, the signed_request does not authenticate the user.
-In this case, the only data that is presented to your page is the language and locale of the user and his or her age.
-
-Because the data, that is needed to sign in the user, is missing, the CanvasSignInController will issue an explicit authentication-request to the Graph-API via a so-called Server-Side Log-In.
-This process includes a redirect to the Login-Dialog of Facebook and then a second redirect back to your app.
-It requires the specification of a full absolute URL to redirect back to.
-
-Since we are configuring the canvas-sign-in, we want new users to be redirected to the canvas-page of our app.
-Hence, you should use the Facebook-URL of your app: https://apps.facebook.com/YOUR_NAMESPACE.
-This will result in a call to your canvas-page with a signed_request, that authenticates the new user, if the user accepts to share the requested data with your app.
-
-Any other page of your app would work as well, but the result would be a call to the stand-alone version of your app (the version of your app that is called the "Website"-platform by Facebook), meaning that your app is not rendered inside of Facebook.
-Also, it requires one more call from your app to the Graph-API to actually sign in the new user, because Facebook sends the signed_request only to the canvas-page of your app.
-
-To specify the URL, I have introduced a new attribute facebook.app.canvas, that is handed to the CanvasSignInController.
-You can specify it when starting your app:
-
mvn spring-boot:run \
- -Dfacebook.app.id=YOUR_ID \
- -Dfacebook.app.secret=YOUR_SECRET \
- -Dfacebook.app.canvas=https://apps.facebook.com/YOUR_NAMESPACE
-
--Be aware, that this process requires the automatic sign-up of new users, that we enabled in part 3 of this series. -Otherwise, the user would be redirected to the sign-up-page of your application, after he allowed your app to access the requested data. -Obviously, that would be very confusing for the user, so we really need automatic sign-up in this use-case! -
--In the next part of this series, I will show you, how you can debug the calls, that Spring Social makes to the Graph-API, by turning on the debugging of the classes, that process the HTTP-requests and -responses, that your app is making. -
-This article was published in the course of a research-project that is funded by the European Union and the federal state North Rhine-Westphalia.
-
-
-
-
-
-
-In the last part of this series, I showed you how you can sign in your users through the signed_request that is sent to your canvas-page.
-In this part, I will show you how to turn on logging of the HTTP-requests that your app sends to, and the -responses it receives from, the Facebook Graph-API.
-
-You can find the source-code on http://juplo.de/git/examples/facebook-app/
-and browse it via gitweb.
-Check out part-07 to get the source for this part of the series.
-
-If you are developing your app, you will often wonder why something does not work as expected.
-In this case, it is often very useful to be able to debug the communication between your app and the Graph-API.
-But since all requests to the Graph-API are secured by SSL, you cannot simply listen in with tcpdump or wireshark.
-
-Fortunately, you can turn on the debugging of the underlying classes that process these requests, to sidestep this problem.
-
-In its default-configuration, the Spring Framework will use the HttpURLConnection that comes with the JDK as HTTP-client.
-As described in the documentation, some advanced methods are not available when using HttpURLConnection.
-Besides, the package HttpClient, which is part of Apache's HttpComponents, is a much more mature, powerful and configurable alternative.
-For example, you can easily plug in connection pooling, to speed up the connection handling, or caching, to reduce the amount of requests that go over the wire.
-In production, you should always use this implementation instead of the default one that comes with the JDK.
-
-Hence, we will switch our configuration to use the HttpClient from Apache, before turning on the debug-logging.
-
From HttpURLConnection To HttpClient
-To switch from the default client that comes with the JDK to Apache's HttpClient, you have to configure an instance of HttpComponentsClientHttpRequestFactory as HttpRequestFactory in your SocialConfig:
-
@Bean
-public HttpComponentsClientHttpRequestFactory requestFactory(Environment env)
-{
- HttpComponentsClientHttpRequestFactory factory =
- new HttpComponentsClientHttpRequestFactory();
- factory.setConnectTimeout(
- Integer.parseInt(env.getProperty("httpclient.timeout.connection"))
- );
- factory.setReadTimeout(
- Integer.parseInt(env.getProperty("httpclient.timeout.read"))
- );
- return factory;
-}
-
-
-To use this configuration, you also have to add the dependency org.apache.httpcomponents:httpclient to your pom.xml.
-
-As you can see, this would also be the right place to enable other specialized configuration-options.
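-For example, a sketch of how one might plug a pooling HttpClient into the request-factory (assuming HttpClient 4.x, as used by Spring Social; the pool-sizes are arbitrary example-values):
-
-import org.apache.http.impl.client.CloseableHttpClient;
-import org.apache.http.impl.client.HttpClients;
-import org.springframework.context.annotation.Bean;
-import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
-
-@Bean
-public HttpComponentsClientHttpRequestFactory pooledRequestFactory()
-{
-  // Pooling HttpClient, to speed up the connection handling
-  CloseableHttpClient client = HttpClients.custom()
-      .setMaxConnTotal(100)
-      .setMaxConnPerRoute(20)
-      .build();
-  HttpComponentsClientHttpRequestFactory factory =
-      new HttpComponentsClientHttpRequestFactory();
-  factory.setHttpClient(client);
-  return factory;
-}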
-
-I configured a short-cut to enable the logging of the HTTP-headers of the communication between the app and the Graph-API.
-Simply run the app with the additional switch -Dhttpclient.logging.level=DEBUG.
-
-If the headers are not enough to answer your questions, you can enable a lot more debugging messages.
-You just have to overwrite the default logging-levels.
-Read the original documentation of HttpClient, for more details.
-
-For example, to enable logging of the headers and the content of all requests, you have to start your app like this: -
-mvn spring-boot:run \
- -Dfacebook.app.id=YOUR_ID \
- -Dfacebook.app.secret=YOUR_SECRET \
- -Dfacebook.app.canvas=https://apps.facebook.com/YOUR_NAMESPACE \
- -Dlogging.level.org.apache.http=DEBUG \
- -Dlogging.level.org.apache.http.wire=DEBUG
-
-
-The second switch is necessary, because I defined the default-level ERROR for that logger in our src/main/resources/application.properties, to enable the short-cut for logging only the headers.
-
-This article was published in the course of a research-project that is funded by the European Union and the federal state North Rhine-Westphalia.
-
-
-
-
-
-
-Releasing a maven-plugin via Maven Central does not work if you have switched to Java 8.
-This happens because, hidden in the oss-parent that you have to configure as parent of your project to be able to release it via Sonatype, the maven-javadoc-plugin is configured for you.
-And the version of javadoc that is shipped with Java 8 by default checks the syntax of the comments and fails if anything unexpected is seen.
-
-
-Unfortunately, the special javadoc-tags like @goal or @phase, which are needed to configure the maven-plugin, are unexpected for javadoc.
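-For illustration, a (hypothetical) Mojo whose javadoc-comment carries such tags - exactly the kind of comment the Java-8-javadoc refuses to accept:
-
-import org.apache.maven.plugin.AbstractMojo;
-import org.apache.maven.plugin.MojoExecutionException;
-
-/**
- * A hypothetical example-Mojo, configured via javadoc-tags.
- *
- * @goal export
- * @phase process-classes
- * @threadSafe
- * @requiresDependencyResolution runtime
- */
-public class ExportMojo extends AbstractMojo
-{
-  @Override
-  public void execute() throws MojoExecutionException
-  {
-    getLog().info("plugin-logic goes here...");
-  }
-}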
-
-
-As described elsewhere, you can easily turn off the linting in the plugins-section of your pom.xml:
-
<plugin>
- <groupId>org.apache.maven.plugins</groupId>
- <artifactId>maven-javadoc-plugin</artifactId>
- <version>2.7</version>
- <configuration>
- <additionalparam>-Xdoclint:none</additionalparam>
- </configuration>
-</plugin>
-
-
-Another, not so well-known approach, which I found in a fix for an issue of some project, is to declare the unknown tags in the configuration of the maven-javadoc-plugin:
-
<plugin>
- <groupId>org.apache.maven.plugins</groupId>
- <artifactId>maven-javadoc-plugin</artifactId>
- <version>2.7</version>
- <configuration>
- <tags>
- <tag>
- <name>goal</name>
- <placement>a</placement>
- <head>Goal:</head>
- </tag>
- <tag>
- <name>phase</name>
- <placement>a</placement>
- <head>Phase:</head>
- </tag>
- <tag>
- <name>threadSafe</name>
- <placement>a</placement>
- <head>Thread Safe:</head>
- </tag>
- <tag>
- <name>requiresDependencyResolution</name>
- <placement>a</placement>
- <head>Requires Dependency Resolution:</head>
- </tag>
- <tag>
- <name>requiresProject</name>
- <placement>a</placement>
- <head>Requires Project:</head>
- </tag>
- </tags>
- </configuration>
-</plugin>
-
-
-
-
-
-
-This article was published in the course of a research-project that is funded by the European Union and the federal state North Rhine-Westphalia.
-
-
-
-
-
-
-During one of our other projects ‐ the development of a vertical search-engine for events and locations, which is funded by the ministry of economy of NRW ‐ we realized that we needed Hibernate 5 and some of the more sophisticated JPA-configuration-options.
-
-Unfortunately ‐ for us ‐ the old releases of this plugin support neither Hibernate 5 nor all configuration options that are available in the META-INF/persistence.xml.
-
-Fortunately ‐ for you ‐ we decided that we really need all of that and had to integrate it into our little plugin.
-
-Due to changes in the way Hibernate has to be configured internally, this release is a nearly complete rewrite.
-It was no longer possible to just use the SchemaExport-tool to build up the configuration and still support all possible configuration-approaches.
-Hence, the plugin now builds up the configuration using Services and Registries, as described in the Integration Guide.
-
-We also took the opportunity to simplify the configuration.
-Beforehand, the plugin had just used the configuration that was set up in the class SchemaExport.
-This relieved us from the burden of understanding the configuration internals, but brought up some oddities of the internal implementation of the tool.
-It also turned out to be a bad decision in the long run, because some configuration options are hard-coded in that class and cannot be changed.
-
-By building up the whole configuration by hand, it is now possible to implement separate goals for creating and dropping the schema.
-Also, it enables us to add a goal update in one of the next releases.
-Because of all these improvements, you have to revise your configuration if you want to switch from 1.x to 2.x.
-
-Be warned: this release is no drop-in replacement of the previous releases! -
-While rewriting the plugin, we focused on Hibernate 5, which was not supported by the older releases because of some of the oddities of the internal implementation of the SchemaExport-tool.
-We tried to maintain backward compatibility.
-
-You should be able to use the new plugin with Hibernate 5 and also with older versions of Hibernate (we only tested that for Hibernate 4).
-Because of that, we dropped the 4 in the name of the plugin!
-
-We tried to support all possible configuration-approaches that Hibernate 5 understands.
-This includes hard-coded XML-mapping-files in the META-INF/persistence.xml, which do not seem to be used very often, but which we needed in one of our own projects.
-
-Therefore, the plugin now understands all (or most of?) the relevant configuration options that one can specify through a standard JPA-configuration.
-The plugin should now work with any configuration that you drop in from your existing JPA- or Hibernate-projects.
-All recognized configuration from the different possible configuration-sources is merged together, considering the configuration-method-precedence described in the documentation.
-
-We hope we did not make any unhandy assumptions while designing the merge-process.
-Please let us know if something goes wrong in your projects and you think it is because we messed it up!
-commit 64b7446c958efc15daf520c1ca929c6b8d3b8af5
-Author: Kai Moritz
-Date:   Tue Mar 8 00:25:50 2016 +0100
-
-    javadoc hat to be configured multiple times for release:prepare
-
-commit 1730d92a6da63bdcc81f7a1c9020e73cdc0adc13
-Author: Kai Moritz
-Date:   Tue Mar 8 00:13:10 2016 +0100
-
-    Added the special javadoc-tags for maven-plugins to the configuration
-
-commit 0611db682bc69b80d8567bf9316668a1b6161725
-Author: Kai Moritz
-Date:   Mon Mar 7 16:01:59 2016 +0100
-
-    Updated documentation
-
-commit a275df25c52fdb7b5b4275fcf9a359194f7b9116
-Author: Kai Moritz
-Date:   Mon Mar 7 17:56:16 2016 +0100
-
-    Fixed missing menu on generated site: moved template from skin to project
-
-commit e8263ad80b1651b812618c964fb02f7e5ddf3d7e
-Author: Kai Moritz
-Date:   Mon Mar 7 14:44:53 2016 +0100
-
-    Turned of doclint, that was introduced in Java 8
-
-    See: http://blog.joda.org/2014/02/turning-off-doclint-in-jdk-8-javadoc.html
-
-commit 62ec2b1b98d5ce144f1ac41815b94293a52e91e6
-Author: Kai Moritz
-Date:   Tue Dec 22 19:56:41 2015 +0100
-
-    Fixed ConcurrentModificationException
-
-commit 9d6e06c972ddda45bf0cd2e6a5e11d8fa319c290
-Author: Kai Moritz
-Date:   Mon Dec 21 17:01:42 2015 +0100
-
-    Fixed bug regarding the skipping of unmodified builds
-
-    If a property or class was removed, its value or md5sum stayed in the set
-    of md5sums, so that each following build (without a clean) was juged as
-    modified.
-
-commit dc652540d007799fb23fc11d06186aa5325058db
-Author: Kai Moritz
-Date:   Sun Dec 20 21:06:37 2015 +0100
-
-    All packages up to the root are checked for annotations
-
-commit 851ced4e14fefba16b690155b698e7a39670e196
-Author: Kai Moritz
-Date:   Sun Dec 20 13:32:48 2015 +0100
-
-    Fixed bug: the execution is no more skipped after a failed build
-
-    After a failed build, further executions of the plugin were skipped, because
-    the MD5-summs suggested, that nothing is to do because nothing has changed.
-    Because of that, the MD5-summs are now removed in case of a failure.
-
-commit 08649780d2cd70f2861298d683aa6b1945d43cda
-Author: Kai Moritz
-Date:   Sat Dec 19 18:02:02 2015 +0100
-
-    Mappings from JPA-mapping-files are considered
-
-commit bb8b638714db7fc02acdc1a9032cc43210fe5c0e
-Author: Kai Moritz
-Date:   Sat Dec 19 03:46:49 2015 +0100
-
-    Fixed minor misconfiguration in integration-test dependency test
-
-    Error because of multiple persistence-units by repeated execution
-
-commit 3a7590b8862c3be691b05110f423865f6674f6f6
-Author: Kai Moritz
-Date:   Thu Dec 17 03:10:33 2015 +0100
-
-    Considering mapping-configuration from persistence.xml and hibernate.cfg.xml
-
-commit 23668ccaa93bfbc583c1697214bae116bd9f4ef6
-Author: Kai Moritz
-Date:   Thu Dec 17 02:53:38 2015 +0100
-
-    Sidestepped bug in Hibernate 5
-
-commit 8e5921c9e76b4540f1d4b75e05e338001145ff6d
-Author: Kai Moritz
-Date:   Wed Dec 16 22:09:00 2015 +0100
-
-    Introduced the goal "drop"
-
-    * Fixed integration-test hibernate4-maven-plugin-envers-sample by adapting
-      it to the new drop-goal
-    * Adapted the other integration-tests to the new naming schema for the
-      create-script
-
-commit 6dff3bfb0f9ea7a1d0cc56398aaad29e31a17b91
-Author: Kai Moritz
-Date:   Wed Dec 16 18:08:56 2015 +0100
-
-    Reworked configuration and the tracking thereof
-
-    * Moved common parameters from CreateMojo to AbstractSchemaMojo
-    * Reordered parameters into sensible groups
-    * Renamed the maven-property-names of the parameters
-    * All configuration-parameters are tracked, not only hibernate-parameters
-    * Introduced special treatment for some of the plugin-parameters (export
-      and show)
-
-commit b316a5b4122c3490047b68e1e4a6df205645aad5
-Author: Kai Moritz
-Date:   Wed Oct 21 11:49:56 2015 +0200
-
-    Reworked plugin-configuration: worshipped the DRY-principle
-
-commit 4940080670944a15916c68fb294e18a6bfef12d5
-Author: Kai Moritz
-Date:   Fri Oct 16 12:16:30 2015 +0200
-
-    Refined reimplementation of the plugin for Hibernate 5.x
-
-    Renamed the plugin from hibernate4-maven-plugin to hibernate-maven-plugin,
-    because the goal is, to support all recent older versions with the new
-    plugin.
-
-commit fdda82a6f76deefd10f83da89d7e82054e3c3ecd
-Author: Kai Moritz
-Date:   Wed Oct 21 12:18:29 2015 +0200
-
-    Integration-Tests are skiped, if "maven.test.skip" is set to true
-
-commit b971570e28cbdc3b27eca15a7395586bee787446
-Author: Kai Moritz
-Date:   Tue Sep 8 13:55:43 2015 +0200
-
-    Updated version of juplo-skin for generation of documentation
-
-commit 3541cf3742dd066b94365d351a3ca39a35e3d3c8
-Author: Kai Moritz
-Date:   Tue May 19 21:41:50 2015 +0200
-
-    Added new configuration sources in documentation about precedence
-This article was published in the course of a research-project that is funded by the European Union and the federal state North Rhine-Westphalia.
-
-
-
-
-
-
-Recently, I had a lot of trouble deploying my spring-boot-app as WAR under Tomcat 8 on Debian Jessie.
-The WAR was found and deployed by Tomcat, but it was never started.
-Browsing the URL of the app resulted in a 404.
-And instead of the fancy Spring-Boot ASCII-art banner, the only matching entry that showed up in my log-file was:
-INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Spring WebApplicationInitializers detected on classpath: [org.springframework.boot.autoconfigure.jersey.JerseyAutoConfiguration$JerseyWebApplicationInitializer@1fe086c]
-
-
-A blog-post from Stefan Isle led me to the solution of what was going wrong.
-In my case, there was no wrong version of Spring on the classpath.
-But my WebApplicationInitializer was not found, because I had compiled it with a version of Java that was not available on my production system.
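-For context: a typical initializer of a bootified WAR is a plain class like the following sketch (the package of SpringBootServletInitializer differs between Spring-Boot versions; Application stands for your @SpringBootApplication-class):
-
-import org.springframework.boot.builder.SpringApplicationBuilder;
-import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;
-
-public class ServletInitializer extends SpringBootServletInitializer
-{
-  @Override
-  protected SpringApplicationBuilder configure(SpringApplicationBuilder builder)
-  {
-    // Registers the @SpringBootApplication-class as configuration-source
-    return builder.sources(Application.class);
-  }
-}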
-
WebApplicationInitializer Not Found Because Of Wrong Java Version
-On my development box, I had compiled and tested the WAR with Java 8.
-But on my production system, running Debian 8 (Jessie), only Java 7 was available.
-And because of that, my WebApplicationInitializer was never detected.
-
-After installing Java 8 from debian-backports on my production system, as described in this nice debian-upgrade note, the WebApplicationInitializer of my app was found and everything worked like a charm again.
-
-This article was published in the course of a research-project that is funded by the European Union and the federal state North Rhine-Westphalia.
-
-
-
-
-
-
-Sometimes, you want to inspect the Auto-Configuration-Report of an app that is started via mvn spring-boot:run.
-And, unfortunately, none of the guides you can find via Google tells you how to turn on the Auto-Configuration-Report in this case.
-Hence, I hope I can help out with this little tip.
-
-mvn spring-boot:run
-The report is shown, if the logging for org.springframework.boot.autoconfigure.logging is set to DEBUG.
-The simplest way to do that is to add the following line to your src/main/resources/application.properties:
-
logging.level.org.springframework.boot.autoconfigure.logging=DEBUG
-
-
-I was not able to enable the logging via a command-line-switch.
-The seemingly obvious way, to add the property to the command line with a -D like this:
-
mvn spring-boot:run -Dlogging.level.org.springframework.boot.autoconfigure.logging=DEBUG
-
-did not work for me.
-If anyone could point out how to do that in a comment to this post, I would be really grateful!
-This article was published in the course of a research-project that is funded by the European Union and the federal state North Rhine-Westphalia.
-
-
-
-
-
-
-Recently, I stumbled over an unexpected behaviour of the deep-equal()-method introduced by XPath 2.0.
-It cost me at least two hours to find out what was going on.
-So I want to share this with you, in case you are wasting time on the same problem and trying to find a solution via Google ;)
-
-
-If you have never heard of deep-equal() and just wonder how to compare XML-nodes in the right way, you should probably read this excellent article about equality in XSLT as a starter.
-
-My problem was that I wanted to parse/output a node only if there exists no node on the ancestor-axis that has an exact duplicate of that node as a direct child.
-
Comparing Nodes With = And With deep-equal()
-If you just use simple equality (with = or eq), the two compared nodes are converted into strings implicitly.
-That is no problem if you are comparing attributes, or nodes that only contain text.
-But in all other cases, you will only compare the text-contents of the two nodes and their children.
-Hence, if they differ only in an attribute, your test will report that they are equal, which might not be what you are expecting.
-
-For example, the XPath-expression -
-//child/ref[ancestor::parent/ref=.]
-
- will match the <ref>-node with @id='bar' that is nested inside the <child>-node in this example-XML, which I was not expecting:
-
<root>
- <parent>
- <ref id="foo"><content>Same Text-Content</content></ref>
- <child>
- <ref id="bar"><content>Same Text-Content</content></ref>
- </child>
- </parent>
-</root>
-
-So, what I tried after I found out about deep-equal() was the following XPath-expression, which solves the problem in the above example:
-
//child/ref[deep-equal(ancestor::parent/ref,.)]
-Unexpected Mismatches With deep-equal()
-But, moving on, I stumbled across cases where I was expecting a match, but deep-equal() does not match the nodes.
-For example:
-
<root>
- <parent>
- <ref id="same">
- <content>Same Text-Content</content>
- </ref>
- <child>
- <ref id="same">
- <content>Same Text-Content</content>
- </ref>
- </child>
- </parent>
-</root>
-You probably catch the difference at first glance, since I laid out the examples accordingly and gave you a hint in the heading of this post - but it really took me a long time to get it:
-
-deep-equal() compares all child-nodes and only yields a match if the compared nodes have exactly the same child-nodes.
-But in the second example, the compared <ref>-nodes contain whitespace before and after their child-node <content>.
-And this whitespace in fact forms implicit child-nodes of type text.
-Hence, the two nodes in the second example differ, because the indentation of the second one has two more spaces.
-
-Unfortunately, I do not really know a good solution.
-(If you come up with one, feel free to note or link it in the comments!)
-
-The best solution would be an optional additional argument for deep-equal() that tells the function to ignore such whitespace.
-In fact, some XSLT-processors do provide such an argument.
-
-The only other solution I can think of is to write another XSLT-script that removes all the whitespace between tags, to circumvent this, at first glance unexpected, behaviour of deep-equal().
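-If you are post-processing the documents in Java anyway, the same idea can be sketched with plain DOM: drop every whitespace-only text-node before comparing. This is only an illustration of the workaround, not part of the original scripts:
-
-import org.w3c.dom.Node;
-import org.w3c.dom.NodeList;
-
-public class WhitespaceStripper
-{
-  /** Removes all whitespace-only text-nodes below the given node */
-  public static void strip(Node node)
-  {
-    NodeList children = node.getChildNodes();
-    for (int i = children.getLength() - 1; i >= 0; i--)
-    {
-      Node child = children.item(i);
-      if (child.getNodeType() == Node.TEXT_NODE
-          && child.getTextContent().trim().isEmpty())
-        node.removeChild(child); // the indentation between the tags
-      else if (child.getNodeType() == Node.ELEMENT_NODE)
-        strip(child); // recurse into nested elements
-    }
-  }
-}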
-
-This article was published in the course of a research-project that is funded by the European Union and the federal state North Rhine-Westphalia.
-
-
-
-
-
-
-Copy and paste to execute the two steps on Linux: -
-curl -sc - https://juplo.de/wp-uploads/zookeeper+tls.tgz | tar -xzv && cd zookeeper+tls && ./README.sh
-
-A German translation of this article can be found on http://trion.de.
-
-Up until now (version 2.3.0 of Apache Kafka), it is not possible to encrypt the communication between the Kafka-Brokers and their ZooKeeper-ensemble.
-This is not possible, because ZooKeeper 3.4.14, which is shipped with Apache Kafka 2.3.0, lacks support for TLS-encryption.
-The documentation deemphasizes this with the observation that usually only non-sensitive data (configuration-data and status information) is stored in ZooKeeper, and that it would not matter if this data were world-readable, as long as it can be protected against manipulation, which can be done through proper authentication and ACLs for zNodes:
-The rationale behind this decision is that the data stored in ZooKeeper is not sensitive, but inappropriate manipulation of znodes can cause cluster disruption. (Kafka-Documentation)-
-This quote obfuscates the elsewhere-mentioned fact that there are use-cases that store sensitive data in ZooKeeper.
-For example, if authentication via SASL/SCRAM or Delegation Tokens is used.
-Accordingly, the documentation often stresses that usually there is no need to make ZooKeeper accessible to normal clients.
-Nowadays, only admin-tools need direct access to the ZooKeeper-ensemble.
-Hence, it is stated as a best practice to make the ensemble available only on a local network, hidden behind a firewall or such.
-
-In clear text: one must not run a Kafka-Cluster that spans more than one data-center, or must at least make sure that all communication is tunneled through a virtual private network.
-On May 20th, 2019, version 3.5.5 of ZooKeeper was released.
-Version 3.5.5 is the first stable release of the 3.5.x branch and introduces the support for TLS-encryption that the community has yearned for for so long.
-It supports the encryption of all communication between the nodes of a ZooKeeper-ensemble and between ZooKeeper-Servers and -Clients.
-
-Part of ZooKeeper is a sophisticated client-API that provides a convenient abstraction for the communication between clients and servers over the Atomic Broadcast Protocol.
-The TLS-encryption is applied by this API transparently.
-Because of that, all client-implementations can profit from this new feature through a simple library-upgrade from 3.4.14 to 3.5.5.
-
-This article will walk you through an example that shows how to carry out such a library-upgrade for Apache Kafka 2.3.0 and how to configure a cluster to use TLS-encryption when communicating with a standalone ZooKeeper.
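-Just to illustrate how transparent this is for client-code: with the 3.5.5-library on the classpath, a plain Java-client only has to set the same switches that are passed to the brokers further below. A sketch, using the host-name, port and stores from this example:
-
-import org.apache.zookeeper.ZooKeeper;
-
-public class SecureZooKeeperClient
-{
-  public static void main(String[] args) throws Exception
-  {
-    System.setProperty("zookeeper.clientCnxnSocket", "org.apache.zookeeper.ClientCnxnSocketNetty");
-    System.setProperty("zookeeper.client.secure", "true");
-    System.setProperty("zookeeper.ssl.keyStore.location", "client.jks");
-    System.setProperty("zookeeper.ssl.keyStore.password", "confidential");
-    System.setProperty("zookeeper.ssl.trustStore.location", "truststore.jks");
-    System.setProperty("zookeeper.ssl.trustStore.password", "confidential");
-
-    // The TLS-handshake happens transparently inside the client-API
-    ZooKeeper zk = new ZooKeeper("zookeeper:2182", 15000, event -> {});
-    System.out.println(zk.getState());
-    zk.close();
-  }
-}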
-The presented setup is meant for evaluation only!
-
-It fiddles with the libraries used by Kafka, which might cause unforeseen issues.
-Furthermore, using TLS-encryption in ZooKeeper requires one to switch from the battle-tested NIOServerCnxnFactory, which uses the NIO-API directly, to the newly introduced NettyServerCnxnFactory, which is built on top of Netty.
-
-The article will now walk you step by step through the setup.
-If you just want to evaluate the example, you can jump to the download-links.
-
-All commands must be executed in the same directory.
-We recommend creating a new directory for that purpose.
-
-First of all: download version 2.3.0 of Apache Kafka and version 3.5.5 of Apache ZooKeeper:
-curl -sc - http://ftp.fau.de/apache/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz | tar -xzv
-curl -sc - http://ftp.fau.de/apache/kafka/2.3.0/kafka_2.12-2.3.0.tgz | tar -xzv
-
-Remove the 3.4.14-version from the libs-directory of Apache Kafka:
-
rm -v kafka_2.12-2.3.0/libs/zookeeper-3.4.14.jar
-
-Then copy the JARs of the new version of Apache ZooKeeper into that directory (the last JAR is only needed for CLI-clients, like for example zookeeper-shell.sh):
cp -av apache-zookeeper-3.5.5-bin/lib/zookeeper-3.5.5.jar kafka_2.12-2.3.0/libs/
-cp -av apache-zookeeper-3.5.5-bin/lib/zookeeper-jute-3.5.5.jar kafka_2.12-2.3.0/libs/
-cp -av apache-zookeeper-3.5.5-bin/lib/netty-all-4.1.29.Final.jar kafka_2.12-2.3.0/libs/
-cp -av apache-zookeeper-3.5.5-bin/lib/commons-cli-1.2.jar kafka_2.12-2.3.0/libs/
-
-That is all there is to do to upgrade ZooKeeper.
-If you run one of the Kafka-commands, it will use ZooKeeper 3.5.5 from now on.
-
-You can read more about setting up a private CA in this post...
--Create the root-certificate for the CA and store it in a Java-truststore: -
-openssl req -new -x509 -days 365 -keyout ca-key -out ca-cert -subj "/C=DE/ST=NRW/L=MS/O=juplo/OU=kafka/CN=Root-CA" -passout pass:superconfidential
-keytool -keystore truststore.jks -storepass confidential -import -alias ca-root -file ca-cert -noprompt
-
-
-The following commands will create a self-signed certificate in zookeeper.jks.
-What happens is:
-
-A new key-pair is created for the common name zookeeper, which is also valid for the alternative host-name localhost
-A certificate-signing-request is created for that key-pair
-The request is signed by our private CA, respecifying the SAN-extension explicitly
-The root-certificate of the CA and the signed certificate zookeeper are imported into the keystore zookeeper.jks
-
-You can read more about creating self-signed certificates with multiple domains and building a Chain-of-Trust here...
-NAME=zookeeper
-keytool -keystore $NAME.jks -storepass confidential -alias $NAME -validity 365 -genkey -keypass confidential -dname "CN=$NAME,OU=kafka,O=juplo,L=MS,ST=NRW,C=DE"
-keytool -keystore $NAME.jks -storepass confidential -alias $NAME -certreq -file cert-file
-openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out $NAME.pem -days 365 -CAcreateserial -passin pass:superconfidential -extensions SAN -extfile <(printf "\n[SAN]\nsubjectAltName=DNS:$NAME,DNS:localhost")
-keytool -keystore $NAME.jks -storepass confidential -import -alias ca-root -file ca-cert -noprompt
-keytool -keystore $NAME.jks -storepass confidential -import -alias $NAME -file $NAME.pem
-
--Repeat this with: -
-NAME=kafka-1
-NAME=kafka-2
-NAME=client
-
-Now we have signed certificates for all participants in our small example, stored in separate keystores, each with a Chain-of-Trust set up that is rooted in our private CA.
-We also have a truststore that will validate all these certificates, because it contains the root-certificate of the Chain-of-Trust: the certificate of our private CA.
-We highlight/explain only the configuration-options that are needed for TLS-encryption here!
-
-In our setup, the standalone ZooKeeper essentially needs two specially tweaked configuration files to use encryption.
-Create the file java.env:
SERVER_JVMFLAGS="-Xms512m -Xmx512m -Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory"
-ZOO_LOG_DIR=.
-
-The switch zookeeper.serverCnxnFactory selects the connection-factory that uses the Netty-Framework.
-
-Create the file zoo.cfg:
dataDir=/tmp/zookeeper
-secureClientPort=2182
-maxClientCnxns=0
-authProvider.1=org.apache.zookeeper.server.auth.X509AuthenticationProvider
-ssl.keyStore.location=zookeeper.jks
-ssl.keyStore.password=confidential
-ssl.trustStore.location=truststore.jks
-ssl.trustStore.password=confidential
-
-secureClientPort: We only allow encrypted connections! (If you want to allow unencrypted connections as well, specify clientPort additionally.)
-authProvider.1: Selects authentication through client certificates
-ssl.keyStore.*: Specifies the path to and the password of the keystore with the zookeeper-certificate
-ssl.trustStore.*: Specifies the path to and the password of the common truststore with the root-certificate of our private CA
-
-Copy the file log4j.properties into the current working directory, to enable logging for ZooKeeper (see also java.env):
cp -av apache-zookeeper-3.5.5-bin/conf/log4j.properties .
-
-Start the ZooKeeper-Server:
-apache-zookeeper-3.5.5-bin/bin/zkServer.sh --config . start
-
---config .: The script should search the current directory for the configuration data and certificates.
-
-We highlight/explain only the configuration-options and start-parameters that are needed to encrypt the communication between the Kafka-Brokers and the ZooKeeper-Server!
-The other parameters shown here that are concerned with SSL are only needed for securing the communication between the brokers themselves and between brokers and clients.
-You can read all about them in the standard documentation.
-In short: this example is set up to use SSL for authentication between the brokers and SASL/PLAIN for client-authentication; both channels are encrypted with TLS.
-
-TLS for the ZooKeeper Client-API is configured through Java environment-variables.
-Hence, most of the SSL-configuration for connecting to ZooKeeper has to be specified when starting the broker.
-Only the address and port for the connection itself are specified in the configuration-file.
-Create the file kafka-1.properties:
broker.id=1
-zookeeper.connect=zookeeper:2182
-listeners=SSL://kafka-1:9193,SASL_SSL://kafka-1:9194
-security.inter.broker.protocol=SSL
-ssl.client.auth=required
-ssl.keystore.location=kafka-1.jks
-ssl.keystore.password=confidential
-ssl.key.password=confidential
-ssl.truststore.location=truststore.jks
-ssl.truststore.password=confidential
-listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required user_consumer="pw4consumer" user_producer="pw4producer";
-sasl.enabled.mechanisms=PLAIN
-log.dirs=/tmp/kafka-1-logs
-offsets.topic.replication.factor=2
-transaction.state.log.replication.factor=2
-transaction.state.log.min.isr=2
-
-zookeeper.connect: If you allow insecure connections too, be sure to specify the right port here!
-Start the broker in the background and remember its PID in the file KAFKA-1:
(
- export KAFKA_OPTS="
- -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
- -Dzookeeper.client.secure=true
- -Dzookeeper.ssl.keyStore.location=kafka-1.jks
- -Dzookeeper.ssl.keyStore.password=confidential
- -Dzookeeper.ssl.trustStore.location=truststore.jks
- -Dzookeeper.ssl.trustStore.password=confidential
- "
- kafka_2.12-2.3.0/bin/kafka-server-start.sh kafka-1.properties & echo $! > KAFKA-1
-) > kafka-1.log &
-
-Check the logfile kafka-1.log to confirm that the broker starts without errors!
-zookeeper.clientCnxnSocket: Switches from NIO to the Netty-Framework
-zookeeper.client.secure=true: Switches on TLS-encryption for all connections to any ZooKeeper-Server
-zookeeper.ssl.keyStore.*: Specifies the path to and the password of the keystore with the kafka-1-certificate
-zookeeper.ssl.trustStore.*: Specifies the path to and the password of the common truststore with the root-certificate of our private CA
-
-Do the same for kafka-2!
-And do not forget to adapt the config-file accordingly - or better: just download a copy...
-
-
-All scripts from the Apache-Kafka-Distribution that connect to ZooKeeper are configured in the same way as seen for kafka-server-start.sh.
-For example, to create a topic, you will run:
-
export KAFKA_OPTS="
- -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
- -Dzookeeper.client.secure=true
- -Dzookeeper.ssl.keyStore.location=client.jks
- -Dzookeeper.ssl.keyStore.password=confidential
- -Dzookeeper.ssl.trustStore.location=truststore.jks
- -Dzookeeper.ssl.trustStore.password=confidential
-"
-kafka_2.12-2.3.0/bin/kafka-topics.sh \
- --zookeeper zookeeper:2182 \
- --create --topic test \
- --partitions 1 --replication-factor 2
-
-Note: A different keystore is used here (client.jks)!
-CLI-clients, that connect to the brokers, can be called as usual. -
-
-In this example, they use an encrypted listener on port 9194 (for kafka-1) and are authenticated using SASL/PLAIN.
-The client-configuration is kept in the files consumer.config and producer.config.
-Take a look at these files and compare them with the broker-configuration above.
-If you want to learn more about securing broker/client-communication, we refer you to the official documentation.
-
-If you have trouble starting these clients, download the scripts and take a look at the examples in README.sh.
-This recipe only activates TLS-encryption between Kafka-Brokers and a standalone ZooKeeper.
-It does not show how to enable TLS between ZooKeeper-nodes (which should be easy) or whether it is possible to authenticate Kafka-Brokers via TLS-certificates.
-These topics will be covered in future articles...
--Download and unpack zookeeper+tls.tgz for an evaluation of the presented setup: -
-curl -sc - https://juplo.de/wp-uploads/zookeeper+tls.tgz | tar -xzv
-
--The archive contains a fully automated example. -Just run README.sh in the unpacked directory. -
-It downloads the required software, carries out the library-upgrade, creates the required certificates and starts a standalone ZooKeeper and two Kafka-Brokers that use TLS to encrypt all communication.
-It also executes a console-consumer and a console-producer, which read from and write to a topic, and a zookeeper-shell that communicates directly with the ZooKeeper-node, to prove that the setup is working.
-The ZooKeeper- and broker-instances are left running, to enable the evaluation of the fully encrypted cluster.
-Run README.sh to execute the automated example
-After README.sh has finished, the Kafka-Cluster is still running, so that one can experiment with the commands from README.sh by hand
-README.sh can be executed repeatedly: it will automatically skip all setup-steps that are already done
-Run README.sh stop to stop the Kafka-Cluster (it can be restarted by re-running README.sh)
-Run README.sh cleanup to stop the cluster and remove all created files and data (only the downloaded packages will be left untouched)
-
-The SAN-extension is removed during signing, if it is not respecified explicitly.
-To create a private CA with self-signed multi-domain certificates for your development setup, you simply have to:
-
-Create the root-certificate of your private CA and import it into a truststore
-Create a key-pair and a certificate-signing-request for each service
-Sign each request with the CA, respecifying the SAN-extension explicitly
-Import the root-certificate and the signed certificate into the keystore of the service
-Multi-domain certificates are implemented as a certificate-extension called Subject Alternative Name (SAN).
-One can simply specify the additional domains (or IPs) when creating a certificate.
-
-The following example shows the syntax for the keytool-command that comes with the JDK and is frequently used by Java-programmers to create certificates:
-
keytool \
- -keystore test.jks -storepass confidential -keypass confidential \
- -genkey -alias test -validity 365 \
- -dname "CN=test,OU=security,O=juplo,L=Juist,ST=Niedersachsen,C=DE" \
- -ext "SAN=DNS:test,DNS:localhost,IP:127.0.0.1"
-
-
-If you list the content of the newly created keystore with...
-keytool -list -v -keystore test.jks
-
-...you should see a section like the following one:
-#1: ObjectId: 2.5.29.17 Criticality=false
-SubjectAlternativeName [
- DNSName: test
- DNSName: localhost
- IPAddress: 127.0.0.1
-]
-
-The certificate is also valid for the additionally specified domains and IPs.
-
-The problem is that it is not signed and will not be trusted unless you publicize it explicitly through a truststore.
-This is feasible if you just want to authenticate and encrypt one point-to-point communication.
-But if more clients and/or servers have to be authenticated to each other, updating and distributing the truststore will soon become hell.
-The common solution in this situation is to create a private CA that can sign newly created certificates.
-This way, only the root-certificate of that private CA has to be distributed.
-Clients that know the root-certificate of the private CA will automatically trust all certificates that are signed by that CA.
-But unfortunately, if you sign your certificate, the SAN-extension vanishes: the signed certificate is only valid for the CN.
-(One may think that you just have to export the SAN-extension into the certificate-signing-request - it is not exported by default - but the SAN will still be lost after signing the extended request...)
-
-This removal of the SAN-extension is not a bug, but a feature.
-A CA has to be in control of which domains and IPs it signs certificates for.
-If a client could write arbitrary additional domains into the SAN-extension of his certificate-signing-request, he could fool the CA into signing a certificate for any domain.
-Hence, all entries in a SAN-extension are removed by default during signing.
-This default behavior is very annoying if you just want to run your own private CA to authenticate all your services to each other.
-
-In the following sections, I will walk you through a solution to circumvent this pitfall.
-If you just need a working solution for your development setup, you may skip the explanation and just download the scripts that combine the presented steps.
-
-We are using openssl to create the root-certificate of our private CA:
-
openssl req \
- -new -x509 -subj "/C=DE/ST=Niedersachsen/L=Juist/O=juplo/OU=security/CN=Root-CA" \
- -keyout ca-key -out ca-cert -days 365 -passout pass:extraconfidential
-
--This should create two files: -
-ca-cert, the root-certificate of your CA
-ca-key, the private key of your CA, protected with the password extraconfidential
-
-Be sure to protect ca-key and its password, because anyone who has access to both of them can sign certificates in the name of your CA!
-
-To distribute the root-certificate, so that your Java-clients can trust all certificates that are signed by your CA, you have to import it into a truststore and make that truststore available to your Java-clients:
-keytool \
- -keystore truststore.jks -storepass confidential \
- -import -alias ca-root -file ca-cert -noprompt
-
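-As an alternative to the usual -Djavax.net.ssl.trustStore=... switches, a Java-client can also load the truststore programmatically. A minimal sketch, using the file- and password-names from this article:
-
-import java.io.FileInputStream;
-import java.security.KeyStore;
-import javax.net.ssl.SSLContext;
-import javax.net.ssl.TrustManagerFactory;
-
-public class TrustStoreExample
-{
-  public static SSLContext sslContext() throws Exception
-  {
-    // Load the truststore that contains the root-certificate of the CA
-    KeyStore truststore = KeyStore.getInstance("JKS");
-    try (FileInputStream in = new FileInputStream("truststore.jks"))
-    {
-      truststore.load(in, "confidential".toCharArray());
-    }
-    // Derive an SSLContext that trusts every certificate signed by the CA
-    TrustManagerFactory tmf =
-        TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
-    tmf.init(truststore);
-    SSLContext context = SSLContext.getInstance("TLS");
-    context.init(null, tmf.getTrustManagers(), null);
-    return context;
-  }
-}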
-We are reusing the already created certificate here.
-If you create a new one, there is no need to specify the SAN-extension, since it will not be exported into the request, and this version of the certificate will be overwritten when the signed certificate is reimported:
-keytool \
- -keystore test.jks -storepass confidential \
- -certreq -alias test -file cert-file
-
-
-This will create the file cert-file, which contains the certificate-signing-request.
-It can be deleted after the certificate has been signed (which is done in the next step).
-
-We use openssl x509 to sign the request:
-
openssl x509 \
- -req -CA ca-cert -CAkey ca-key -in cert-file -out test.pem \
- -days 365 -CAcreateserial -passin pass:extraconfidential \
- -extensions SAN -extfile <(printf "\n[SAN]\nsubjectAltName=DNS:test,DNS:localhost,IP:127.0.0.1")
-
-
-This can also be done with openssl ca, which has a slightly different and a little more complicated API.
-openssl ca is meant to manage a real, full-blown CA.
-But we do not need the extra options and complexity for our simple private CA.
-
-The important part here is everything that comes after -extensions SAN.
-It specifies the Subject-Alternative-Name-section that we want to include additionally in the signed certificate.
-Because we are in full control of our private CA, we can specify any domains and/or IPs here that we want.
-The other options are ordinary certificate-signing-stuff that is already better explained elsewhere.
-
-We use a special syntax with the option -extfile that allows us to specify the contents of a virtual file as part of the command.
-You can just as well write your SAN-extension into a file and hand over the name of that file here, as is usually done.
-If you want to specify the same SAN-extension in a file, that file would have to contain:
-
[SAN]
-subjectAltName=DNS:test,DNS:localhost,IP:127.0.0.1
-
-
-Note that the name you give the extension on the command-line with -extensions SAN has to match the header in the (virtual) file ([SAN]).
-
-As a result of the command, the file test.pem will be created, which contains the signed X.509-certificate.
-You can display the contents of that certificate in a human-readable form with:
-
openssl x509 -in test.pem -text
-
-It should display something similar to this example-output
-Finally, you have to import the root-certificate of your CA (ca-cert) and your signed certificate (test.pem) into a keystore and make that keystore available to the Java-service, in order to enable it to authenticate itself using the signed certificate when a client connects.
-
-Import the root-certificate of the CA:
-keytool \
- -keystore test.jks -storepass confidential \
- -import -alias ca-root -file ca-cert -noprompt
-
-Import the signed certificate (this will overwrite the unsigned version):
-keytool \
- -keystore test.jks -storepass confidential \
- -import -alias test -file test.pem
-
--That's it: we are done! -
--You can validate the contents of the created keystore with: -
-keytool \
- -keystore test.jks -storepass confidential \
- -list -v
-
-It should display something similar to this example-output
--To authenticate service A against client B you will have to: -
-make the keystore test.jks available to the service A
-make the truststore truststore.jks available to the client B
-
-If you want your clients to authenticate themselves to your services as well, so that only clients with a trusted certificate can connect (2-Way-Authentication), client B also needs its own signed certificate to authenticate against service A, and service A also needs access to the truststore, to be able to trust that certificate.
--The following two scripts automate the presented steps and may be useful, when setting up a private CA for Java-development: -
-The first script creates the root-certificate of the CA (ca-cert and ca-key) and the truststore truststore.p12
-The second script creates a signed certificate for a given CN (CN.pem) and the keystore CN.p12
-
-Read the source for more options...
-
-Differing from the steps shown above, these scripts use the keystore-format PKCS12.
-This is because otherwise keytool nags about the non-standard default-format JKS in each and every step.
-
-Note: PKCS12 does not distinguish between a store-password and a key-password. Hence, only a store-password is specified in the scripts.
-In Spring Boot 2.2.x, you have to instantiate a @Bean of type InMemoryHttpTraceRepository to enable the HTTP Trace Actuator.
-
Jump to the explanation of and example code for the fix
-Enabling HTTP Trace Before 2.2.x
-Spring Boot comes with a very handy feature called Actuator.
-Actuator provides a built-in production-ready REST-API that can be used to monitor / manage / debug your bootified app.
-To enable it prior to 2.2.x, one only had to:
-<dependency>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-actuator</artifactId>
-</dependency>
-
-
- management.endpoints.web.exposure.include=*
-
- -But... -
-...if you upgrade an app that uses the httptrace-actuator to Spring Boot 2.2.x, or if you enable the httptrace-actuator as described in the documentation, it simply does not work at all!
-
-The simple fix for this problem is, to add a @Bean of type InMemoryHttpTraceRepository to your @Configuration-class:
-
@Bean
-public HttpTraceRepository httpTraceRepository()
-{
- return new InMemoryHttpTraceRepository();
-}
-
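-In context, a complete configuration-class could look like this sketch (TraceConfig is just an arbitrary name):
-
-import org.springframework.boot.actuate.trace.http.HttpTraceRepository;
-import org.springframework.boot.actuate.trace.http.InMemoryHttpTraceRepository;
-import org.springframework.context.annotation.Bean;
-import org.springframework.context.annotation.Configuration;
-
-@Configuration
-public class TraceConfig
-{
-  @Bean
-  public HttpTraceRepository httpTraceRepository()
-  {
-    return new InMemoryHttpTraceRepository();
-  }
-}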
-The cause of this problem is not a bug, but a legitimate change in the default configuration.
-Unfortunately, this change is not noted in the according section of the documentation.
-Instead, it is buried in the Upgrade Notes for Spring Boot 2.2.
-The default-implementation stores the captured data in memory.
-Hence, it consumes much memory, without the user knowing, or even worse: needing it.
-This is especially undesirable in cluster environments, where memory is a precious good.
-And remember: Spring Boot was invented to simplify cluster deployments!
-
-That is why this feature is now turned off by default and has to be turned on explicitly by the user, if needed.
-git diff BRANCH:PATH OTHER_BRANCH:OTHER_PATH
-
-git diff branch_a:file_a.txt branch_b:file_b.txt
-
-git diff HEAD:file.txt a09127a:file.txt
-
-git diff HEAD:file.txt branchname:file.txt
-
-git diff :file.txt branchname:file.txt
-
-git diff :file.txt HEAD~4:file.txt
-
-git diff :file.txt HEAD~4:file.txt
-
-git diff :file.txt remotes/origin/master:file.txt
-
-foo-branch of the bar-repository:
-git diff :file.txt remotes/bar/foo~4:file.txt
-
-If the path (aka object name) contains a colon (:), git interprets the part before the colon as a commit and the part after it as the path in the tree denominated by the commit. (For more details, refer to this post with tips for git show)
git show BRANCH:PATH
-
-file.txt in commit a09127:
-git show a09127a:file.txt
-
-The commit can be specified with any valid commit-reference and may belong to any local or remote branch...
-git show HEAD^^^^:file.txt
-
-git show HEAD~4:file.txt
-
-git show remotes/origin/master~4:file.txt
-
-foo in repository bar:
-git show remotes/bar/foo~4:file.txt
-
-git show a09127a:file.txt | wc -l
-
-git show HEAD~4:file.txt > file.txt
-
-If the path (aka object name) contains a colon (:), git interprets the part before the colon as a commit and the part after it as the path in the tree, denominated by the commit.
./file.
-I will also give some advice for those of you who are new to Docker - but just enough to enable you to follow.
-This is part 2 of this series, which shows how to run a Spring-Boot OAuth2 app behind a gateway - Part 1 is linked above.
- - -
-We will simulate a production-setup by adding the domain, that will be used in production - example.com in our case -, as an alias for localhost.
-
-Additionally, we will start an NGINX as reverse-proxy alongside our app and put both containers into a virtual network.
-This simulates a real-world scenario, where your app will be running behind a gateway together with a bunch of other apps and will have to deal with forwarded requests.
-
-Together, this enables you to test the production-setup of your oauth2-provider against a locally running development environment, including the configuration of the finally used URIs and nasty forwarding-errors.
-To reach this goal we will have to:
-By the way: any other server that can act as reverse proxy, or some real gateway like Zuul, would work as well, but we stick with good old NGINX, to keep it simple.
- - -
-In our example we are using GitHub as oauth2-provider and example.com as the domain, where the app should be found after the release.
-So, we will have to change the Authorization callback URL to
-http://example.com/login/oauth2/code/github
-

-O.k., that's done. -
-
-But we haven't released yet, and nothing can be found on the real server that hosts example.com...
-But still, we really would like to test that production-setup to be sure that we configured all bits and pieces correctly!
-
-
-In order to tackle this chicken-and-egg-problem, we will fool our locally running browser into believing that example.com is our local development system.
-
-
example.com
-On Linux/Unix this can be simply done by editing /etc/hosts.
-You just have to add the domain (example.com) at the end of the line that starts with 127.0.0.1:
-
127.0.0.1 localhost example.com
-
-
-Locally running programs - like your browser - will now resolve example.com as 127.0.0.1.
-
Next, we have to create a virtual network where we can put both containers:
-docker network create juplo
-
--Yes, with Docker it is as simple as that. -
-Docker networks also come with some extra goodies.
-One of them is extremely handy for our use-case: they enable automatic name-resolving for the connected containers.
-Because of that, we do not need to know the IP-addresses of the participating containers, as long as we give each connected container a name.
-We are using Docker here on purpose.
-Using Kubernetes just to test / experiment on a DevOps-box would be overkill.
-Using Docker-Compose might be an option.
-But we want to keep it as simple as possible for now, hence we stick with Docker.
-Also, we are just experimenting here.
-
-You might want to switch to Docker-Compose later.
-Especially if you plan to set up an environment that you will frequently reuse for manual tests or such.
- - -
-To move our app into the virtual network, we have to start it again with the additional parameter --network.
-We also want to give it a name this time, by using --name, to be able to contact it by name.
-
-
-You have to stop and remove the old container from part 1 of this HowTo-series with CTRL-C beforehand, if it is still running - removing is done automatically, because we specified --rm:
-
docker run \
- -d \
- --name app \
- --rm \
- --network juplo \
- juplo/social-logout:0.0.1 \
- --server.use-forward-headers=true \
- --spring.security.oauth2.client.registration.github.client-id=YOUR_GITHUB_ID \
- --spring.security.oauth2.client.registration.github.client-secret=YOUR_GITHUB_SECRET
-
--Summary of the changes in comparison to the statement used in part 1: -
--d to run the container in the background - See tips below...
---server.use-forward-headers=true, which is needed, because our app is running behind a gateway now - I will explain this in more detail later
---network juplo,
-which is necessary to put the app in our virtual network juplo, and --name app, which is necessary to enable DNS-resolving.
-- - -
CTRL-C will stop (and in our case remove) the container again.
--d (for daemonize) to start the container in the background.
-
-docker logs -f NAME (safely disruptable with CTRL-C) and stop (and in our case remove) the container with docker stop NAME.
-docker ps is your friend.
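-By the way: instead of passing --server.use-forward-headers=true, the X-Forwarded-*-headers can also be processed by a servlet-filter that Spring ships out of the box. A sketch of that alternative:
-
-import org.springframework.context.annotation.Bean;
-import org.springframework.web.filter.ForwardedHeaderFilter;
-
-@Bean
-public ForwardedHeaderFilter forwardedHeaderFilter()
-{
-  // Rewrites the request according to the X-Forwarded-*-headers,
-  // before it reaches Spring Security and the controllers
-  return new ForwardedHeaderFilter();
-}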
--Next, we will start NGINX alongside our app and configure it as reverse-proxy: -
-Create the file proxy.conf with the following content:
-upstream upstream_a {
- server app:8080;
-}
-
-server {
- listen 80;
- server_name example.com;
-
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Forwarded-Proto $scheme;
- proxy_set_header Host $host;
- proxy_set_header X-Forwarded-Host $host;
- proxy_set_header X-Forwarded-Port $server_port;
-
- location / {
- proxy_pass http://upstream_a;
- }
-}
-
-The server listens for requests for example.com (server_name) on port 80.
-With the location-directive, we tell this server that all requests shall be handled by the upstream-server upstream_a.
-The upstream-block at the beginning of the configuration-file defines upstream_a to be a forward to app:8080:
-app is simply the name of the container that is running our oauth2-app - remember: the name is resolvable via DNS.
-8080 is the port our app listens on in that container.
-The proxy_set_header-directives are needed by Spring-Boot Security for dealing correctly with the circumstance that it is running behind a reverse-proxy.
-A later part of this series will explain the proxy_set_header-directives in more detail.
-Start NGINX in our virtual network, forwarding port 80 to localhost:
-docker run \
- --name proxy \
- --rm \
- --network juplo -p 80:80 \
- --volume $(pwd)/proxy.conf:/etc/nginx/conf.d/proxy.conf:ro \
- nginx:1.17
-
-This command has to be executed in the directory where you have created the file proxy.conf.
-Port 80 can be bound on localhost, since the docker-daemon runs with root-privileges and hence can use this privileged port - if you do not have another webserver running locally there.
-$(pwd) resolves to your current working-directory - this is the most convenient way to produce the absolute path to proxy.conf that is required by --volume to work correctly.
-example.com to point at localhost you should now be able to open your app as http://example.com in a locally running browser-In this simulated production-setup a lot of stuff can go wrong! -You may face nearly any problem from configuration-mismatches considering the redirect-URIs to nasty and hidden redirect-issues due to forwarded requests. -
-- -Do not mutter at me... -Remember: That was the reason, we set up this simulated production-setup in the first place! - -
--In the next part of this series I will explain some of the most common problems in a production-setup with forwarded requests. -I will also show, how you can debug the oauth2-flow in your simulated production-setup, to discover and solve these problems -
-]]>
-Developing Your first OAuth2-App on localhost with OAuth2 Boot may be easy, ...
-
-...but what about running it in real life? -
-
-This is the first post of a series of Mini-Howtos that gather some help to get you started when switching from localhost to production, with SSL and a reverse-proxy (aka gateway) in front of your app that forwards the requests to your app listening on a different name/IP, port and protocol.
--I will also give some advice for those of you, who are new to Docker - but just enough to enable you to follow. -
-This is Part 1 of this series, that shows how to package a Spring-Boot-App as Docker-Image and run it as a container -
tut-spring-boot-oauth2/logout
-As an example for a simple app that uses OAuth2 for authentication, we will use the third step of the Spring-Boot OAuth2-Tutorial.
--You should work through that tutorial up until that step - called logout -, if you have not done yet. -This will guide you through programming and setting up a simple app, that uses the GitHub-API to authenticate its users. -
--Especially, it explains, how to create and set up a OAuth2-App on GitHub - Do not miss out on that part: You need your own app-ID and -secret and a correctly configured redirect URI. -
--You should be able to build the app as JAR and start that with the ID/secret of your GitHub-App without changing code or configuration-files as follows: -
-mvn package
-java -jar target/social-logout-0.0.1-SNAPSHOT.jar \
- --spring.security.oauth2.client.registration.github.client-id=YOUR_GITHUB_APP_ID \
- --spring.security.oauth2.client.registration.github.client-secret=YOUR_GITHUB_APP_SECRET
-
-
-If the app is running correctly, you should be able to log in and log out via http://localhost:8080/
-
-The folks at Spring-Boot are keeping the guide and this repository up-to-date pretty well. -At the date of the writing of this article it is up to date with version 2.2.2.RELEASE of Spring-Boot. -
-You may as well use any other OAuth2-application here. For example your own POC, if you already have build one that works while running on localhost
I will only explain the protocol in very short words here, so that you can understand what goes wrong in case you stumble across one of the many pitfalls when setting up oauth2.
-You can read more about oauth2 elsewhere.
-For authentication, oauth2 redirects the browser of your user to a server of your oauth2-provider.
-This server authenticates the user and redirects the browser back to your server, providing additional information and resources that let your server know that the user was authenticated successfully and enable it to request more information in the name of the user.
-Hence, when configuring oauth2, one has to:
--There are a lot more things, which can be configured in oauth2, because the protocol is designed to fit a wide range of use-cases. -But in our case, it usually boils down to the parameters mentioned above. -
-
-Considering our combination of spring-security-oauth2 with GitHub this means:
-
-Again, everything can be manually overriden, if needed.
-Configuration-keys starting with spring.security.oauth2.client.registration.github choose GitHub as the oauth2-provider and trigger a bunch of predefined default-configuration.
-If you have set up your own oauth2-provider, you have to configure everything manually.
-
-To facilitate the debugging - and because this most probably will be the way you are deploying your app anyway - we will start by building a docker-image from the app.
-
-For this, you do not have to change a single character in the example project - all adjustments to the configuration will be done, when the image is started as a container.
-Just change to the subdirectory logout of the checked out project and create the following Dockerfile there:
-
FROM openjdk:8-jre-buster
-
-COPY target/social-logout-0.0.1-SNAPSHOT.jar /opt/app.jar
-EXPOSE 8080
-ENTRYPOINT [ "/usr/local/openjdk-8/bin/java", "-jar", "/opt/app.jar" ]
-CMD []
-
--This defines a docker-image, that will run the app. -
-The image is based on openjdk:8-jre-buster, which is an installation of the latest OpenJDK-JDK8 on a Debian-Buster
-The app listens on port 8080
-CMD [] overwrites the default from the parent-image with an empty list - this enables us to pass command-line parameters to our spring-boot app, which we will need to pass in our configuration
-mvn clean package
-docker build -t juplo/social-logout:0.0.1 .
-
-This will tag your image as juplo/social-logout:0.0.1 - you obviously will/should use your own tag here, for example: myfancytag.
Do not miss out on the flyspeck (.) at the end of the last line!
You can run this new image with the following command - and you should do that, to test that everything works as expected:
-docker run \
- --rm \
- -p 8080:8080 \
- juplo/social-logout:0.0.1 \
- --spring.security.oauth2.client.registration.github.client-id=YOUR_GITHUB_ID \
- --spring.security.oauth2.client.registration.github.client-secret=YOUR_GITHUB_SECRET
-
---rm removes this test-container automatically, once it is stopped again
--p 8080:8080 redirects port 8080 on localhost to the app
-Everything after the specification of the image (here: juplo/social-logout:0.0.1) is handed as a command-line parameter to the started Spring-Boot app - that is why we needed to declare CMD [] in our Dockerfile.
-
-We utilize this here to pass the ID and secret of your GitHub-app into the docker container - just like when we started the JAR directly.
-The app should now behave exactly like in the test above, where we started it directly by calling the JAR.
-
-That means that you should still be able to log in to and log out of your app if you browse to http://localhost:8080 -
-At least, if you correctly configured http://localhost:8080/login/oauth2/code/github as authorization callback URL in the settings of your OAuth App on GitHub.
-
In the next part of this series, we will hide the app behind a proxy and simulate that the setup is running on our real server example.com.
-
ObjectMapper mapper = new ObjectMapper();
// Auto-detect all fields, regardless of their visibility
mapper.setVisibility(PropertyAccessor.FIELD, JsonAutoDetect.Visibility.ANY);
// Pretty-print the generated JSON
mapper.enable(SerializationFeature.INDENT_OUTPUT);
String str = mapper.writeValueAsString(new Bar());
-
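To see what this configuration achieves, consider a minimal class like the following (a hypothetical example - the field-visibility setting lets Jackson serialize it, although it has neither getters nor annotations):

public class Bar
{
  private String name = "bar";
  private int answer = 42;
}

writeValueAsString(new Bar()) then yields an indented JSON document containing the two private fields, roughly { "name" : "bar", "answer" : 42 }.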
I have put together a tiny sample project that demonstrates the approach. URL for cloning with Git: https://juplo.de/git/demos/noanno/
It can be executed with mvn spring-boot:run.
Spring offers the annotation @ExceptionHandler to handle exceptions thrown by controllers.
The annotation can be added to methods of a specific controller, or to methods of a @Component class that is itself annotated with @ControllerAdvice.
The latter defines global exception-handling that will be carried out by the DispatcherServlet for all controllers.
The former specifies exception-handlers for a single controller class.
-
This mechanism is documented in the Spring Framework documentation and it is neatly summarized in the blog-article Exception Handling in Spring MVC. In this article, we will focus on testing the specified exception-handlers.
@WebMvcTest-Slice
Spring-Boot offers the annotation @WebMvcTest for tests of the controller-layer of your application.
For a test annotated with @WebMvcTest, Spring-Boot will:

- instantiate only the beans of the controller-layer (classes annotated with @Controller, @RestController, @JsonComponent etc.)
- auto-configure an instance of MockMVC, that can be used to fire requests against these controllers
-
All other beans configured in the app will be ignored.
Hence, a @WebMvcTest fits perfectly for testing exception-handlers, which are part of the controller-layer.
It enables us to mock away the other layers of the application and concentrate on the part that we want to test.
-
Consider the following controller, that defines a request-handling and an accompanying exception-handler for an IllegalArgumentException that may be thrown in the business-logic:
-
@Controller
public class ExampleController
{
  private static final Logger LOG =
      LoggerFactory.getLogger(ExampleController.class);

  @Autowired
  ExampleService service;

  @RequestMapping("/")
  public String controller(
      @RequestParam(required = false) Integer answer,
      Model model)
  {
    Boolean outcome = answer == null ? null : service.checkAnswer(answer);
    model.addAttribute("answer", answer);
    model.addAttribute("outcome", outcome);
    return "view";
  }

  @ResponseStatus(HttpStatus.BAD_REQUEST)
  @ExceptionHandler(IllegalArgumentException.class)
  public ModelAndView illegalArgumentException(IllegalArgumentException e)
  {
    LOG.error("{}: {}", HttpStatus.BAD_REQUEST, e.getMessage());
    ModelAndView mav = new ModelAndView("400");
    mav.addObject("exception", e);
    return mav;
  }
}
-
The exception-handler resolves the exception as 400: Bad Request and renders the specialized error-view 400.
-
With the help of @WebMvcTest, we can easily mock away the actual implementation of the business-logic and concentrate on the code under test: our specialized exception-handler.
-
@WebMvcTest(ExampleController.class)
class ExceptionHandlingApplicationTests
{
  @MockBean ExampleService service;
  @Autowired MockMvc mvc;

  @Test
  void test400ForExceptionInBusinessLogic() throws Exception {
    when(service.checkAnswer(anyInt())).thenThrow(new IllegalArgumentException("FOO!"));

    mvc
      .perform(get(URI.create("http://FOO/?answer=1234")))
      .andExpect(status().isBadRequest());

    verify(service, times(1)).checkAnswer(anyInt());
  }
}
-
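If you additionally want to verify, that the specialized error-view is rendered, MockMvc lets you assert the view name and the exposed model attribute as well - a sketch, using the standard matchers from MockMvcResultMatchers:

mvc
  .perform(get(URI.create("http://FOO/?answer=1234")))
  .andExpect(status().isBadRequest())
  .andExpect(view().name("400"))
  .andExpect(model().attributeExists("exception"));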
We perform a GET with the help of the provided MockMvc and check that the status of the response fulfills our expectations, if we tell our mocked business-logic to throw the IllegalArgumentException that is resolved by our exception-handler.
Yet, the presented approach does not work for all use-cases, because it presumes that a strictly monotonically increasing sequence numbering can be established across all messages - at least concerning all messages that are routed to the same partition.
A source produces messages with reliably unique IDs. From time to time, sending these messages to Kafka may fail. The order in which these messages are sent is crucial with respect to the incident they belong to. Resending the messages in correct order after a failure (or downtime) is no problem. But some of the messages may be sent twice (or more often), because the producer does not know exactly which messages were sent successfully.
-
Incident A - { id: 1, data: "ab583cc8f8" }
Incident B - { id: 2, data: "83ccc8f8f8" }
Incident C - { id: 3, data: "115tab5b58" }
Incident C - { id: 4, data: "83caac564b" }
Incident B - { id: 5, data: "a583ccc8f8" }
Incident A - { id: 6, data: "8f8bc8f890" }
Incident A - { id: 7, data: "07583ab583" }

<< DOWNTIME OR FAILURE >>

Incident C - { id: 4, data: "83caac564b" }
Incident B - { id: 5, data: "a583ccc8f8" }
Incident A - { id: 6, data: "8f8bc8f890" }
Incident A - { id: 7, data: "07583ab583" }
Incident A - { id: 8, data: "930fce58f3" }
Incident B - { id: 9, data: "7583ab93ab" }
Incident C - { id: 10, data: "7583aab583" }
Incident B - { id: 11, data: "b583075830" }
-
Since each message has a unique ID, all messages are inherently idempotent: deduplication is no problem, if the receiver keeps track of the messages it has already seen.
Where is the problem?, you may ask. That's trivial, I just code the deduplication into my consumer!
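A naive sketch of that idea (hypothetical code, not taken from any of the projects mentioned here) would simply remember every ID it has ever seen:

import java.util.HashSet;
import java.util.Set;

public class NaiveDeduplicator
{
  // Grows without bound and is lost on every restart
  private final Set<Long> seenIds = new HashSet<>();

  /** Returns true, if the ID was not seen before and the message should be processed. */
  public boolean firstSighting(long id)
  {
    return seenIds.add(id);
  }
}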
But this approach has several drawbacks, including:

Wouldn't it be much nicer, if we had an efficient and bulletproof algorithm that we can simply plug into our Kafka-pipelines?
In his blog-article, Jaroslaw Kijanowski describes three deduplication algorithms. The first does not scale well, because it only works for single-partition topics. The third aims at a slightly different problem and might fail to deduplicate some messages, if the timing is not tuned correctly. The second looks like a robust solution. But it also looks a bit hacky and is unnecessarily complex in my opinion.
Playing around with his ideas, I have come up with the following algorithm, that combines elements of all three solutions:
-
The algorithm uses the well-known approach that TCP uses to detect and drop duplicate packets.
It is efficient, since we never have to store more sequence numbers than partitions that we are handling.
The algorithm can be implemented easily based on a ValueTransformer, because Kafka Streams provides the ability to store state locally.
-
To clarify the idea, I further simplified the problem for the example implementation:

- All keys and values of the messages are plain Strings, for easy scripting.
That is, our message stream is simply a mapping from names to unique sequence numbers and we want to be able to separate out the contained sequence for a single person, without duplicate entries and without jeopardizing the order of that sequence.
In this simplified setup, the implementation effectively boils down to the following method-override:
@Override
public Iterable<String> transform(String value)
{
  Integer partition = context.partition();
  long sequenceNumber = Long.parseLong(value);

  // The stored high watermark for this partition - null, if none was stored yet
  Long seen = store.get(partition);
  if (seen == null || seen < sequenceNumber)
  {
    store.put(partition, sequenceNumber);
    return Arrays.asList(value);
  }

  // The sequence number was already seen: drop the message as a duplicate
  return Collections.emptyList();
}
-
- The partition is read from the ProcessorContext, that is handed to our instance in the constructor, which is not shown here for brevity.
- Parsing the String-value of the message as long corresponds to the extraction of the sequence number from the value of the message in our simplified setup.
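For completeness, the omitted wiring might look roughly like this - a sketch that obtains the context and the state store via ValueTransformer.init() (the original implementation hands the context to the constructor instead, but the idea is the same):

private ProcessorContext context;
private KeyValueStore<Integer, Long> store;

@Override
public void init(ProcessorContext context)
{
  this.context = context;
  // Look up the state store, that was registered under the name "SequenceNumbers"
  store = (KeyValueStore<Integer, Long>)context.getStateStore("SequenceNumbers");
}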
We can use our ValueTransformer with flatTransformValues(), to let Kafka Streams drop the detected duplicate values:
-
streamsBuilder
  .stream("input")
  .flatTransformValues(
      new ValueTransformerSupplier<String, Iterable<String>>()
      {
        @Override
        public ValueTransformer<String, Iterable<String>> get()
        {
          return new DeduplicationTransformer();
        }
      },
      "SequenceNumbers")
  .to("output");
-
-
One has to register an appropriate store to the StreamsBuilder under the referenced name.
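Such a registration might look like this (a sketch, matching the types used above - the partition number as Integer-key and the high watermark as Long-value):

streamsBuilder.addStateStore(
    Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("SequenceNumbers"),
        Serdes.Integer(),
        Serdes.Long()));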
-
The full source is available on github.com.
The presented deduplication algorithm makes some assumptions, that may not fit your use-case. It is crucial, that these prerequisites are not violated. Therefore, I will spell them out once more:
As a consequence of these assumptions, we have to note: we can only deduplicate messages that are routed to the same partition. This follows, because we can only guarantee message-order per partition. But it should not be a problem for the same reason: we assume a use-case where all messages concerning a specific incident are captured in the same partition.
Since we are only deduplicating messages that are routed to the same partition, we do not need globally unique sequence numbers. Our sequence numbers only have to be unique per partition, to enable us to detect that we have seen a specific message before on that partition. Globally unique sequence numbers clearly are a stronger condition: it does not hurt if the sequence numbers are globally unique, because then they are also unique per partition.
We detect unseen messages by the fact that their sequence number is greater than the last stored high watermark for the partition they are routed to. Hence, we do not rely on a seamless numbering without gaps. It does not hurt if the series of sequence numbers has gaps, as long as two different messages on the same partition are never assigned the same sequence number.
That said, it should be clear that a globally unique, seamless numbering of all messages across all partitions - as in our simple example-implementation - fits well with our approach: the numbering is still unique, if one only considers the messages in one partition, and the gaps, that are introduced by focusing only on the messages of a single partition, do not violate our assumptions.
Last but not least, I want to point out, that this approach silently assumes, that the sequence number of the message is not identical to the key of the message. On the contrary: the sequence number is expected to be different from the key of the message!

If one would use the key of the message as its sequence number (provided that it is unique and represents a strictly increasing sequence of numbers), one would indeed assure that all duplicates can be detected, but one would at once force the implementation to be indifferent concerning the order of the messages.

That is, because subsequent messages are forced to have different keys, since all messages are required to have unique sequence numbers. But messages with different keys may be routed to different partitions - and Kafka can only guarantee message ordering for messages that live on the same partition. Hence, one has to assume that the order in which the messages are sent is not retained, if one uses the message-keys as sequence numbers - unless only one partition is utilized, which is contradictory to our primary goal here: enabling scalability through data-sharding.
This is also true, if the key of a message contains an invariant ID and only embeds the changing sequence number, because the default partitioning algorithm always considers the key as a whole, and if any part of it changes, the outcome of the algorithm might change.
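For reference, Kafka's default partitioner derives the partition for a keyed message from a hash over the serialized key as a whole - roughly this logic (using helpers from org.apache.kafka.common.utils.Utils):

int partition = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;

So a change to any byte of the key may yield a different partition.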
For a production-ready implementation of the presented approach, I would advise to store the sequence number in a message header, or to provide a configurable extractor, that can derive the sequence number from the contents of the value of the message. It would be perfectly o.k., if the IDs of the messages are used as sequence numbers, as long as they are unique and monotonically increasing and are stored in the value of the message - not in / as the key!
In this mini-HowTo I will show a way how to instantiate multiple beans dynamically in Spring-Boot, depending on configuration-properties. We will:

- use an ApplicationContextInitializer to add the beans to the context, before it is refreshed
- use an EnvironmentPostProcessor to access the configured configuration sources
- register the EnvironmentPostProcessor with Spring-Boot
Additional beans can be added programmatically very easily with the help of an ApplicationContextInitializer:
-
@AllArgsConstructor
public class MultipleBeansApplicationContextInitializer
    implements
      ApplicationContextInitializer<ConfigurableApplicationContext>
{
  private final String[] sites;

  @Override
  public void initialize(ConfigurableApplicationContext context)
  {
    ConfigurableListableBeanFactory factory =
        context.getBeanFactory();
    for (String site : sites)
    {
      SiteController controller =
          new SiteController(site, "Description of site " + site);
      factory.registerSingleton("/" + site, controller);
    }
  }
}
-
-
This simplified example is configured with a list of strings that should be registered as controllers with the DispatcherServlet.
All "sites" are instances of the same controller SiteController, which are instantiated and registered dynamically.
-
The instances are registered as beans with the method registerSingleton(String name, Object bean) of a ConfigurableListableBeanFactory, that can be accessed through the provided ConfigurableApplicationContext.
-
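The SiteController itself is not important for the mechanism. Since the bean names start with /, Spring's BeanNameUrlHandlerMapping can map them to URLs; a minimal sketch of the assumed shape (the real demo project may differ) could implement the classic Controller interface from org.springframework.web.servlet.mvc:

public class SiteController implements Controller
{
  private final String site;
  private final String description;

  public SiteController(String site, String description)
  {
    this.site = site;
    this.description = description;
  }

  @Override
  public ModelAndView handleRequest(
      HttpServletRequest request,
      HttpServletResponse response) throws Exception
  {
    // Write the description directly and signal, that the response is already handled
    response.setContentType("text/plain");
    response.getWriter().print(description);
    return null;
  }
}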
The array of strings represents the accessed configuration properties in the simplified example. The array will most probably hold more complex data-structures in a real-world application.

But how do we get access to the configuration-parameters, that are injected in this array here...?
- -
Instantiating and registering the additional beans is easy.
The real problem is to access the configuration properties in the early plumbing-stage of the application-context that our ApplicationContextInitializer runs in:

The initializer cannot be instantiated and autowired by Spring!
-
-
The Bad News: in the early stage we are running in, we cannot use autowiring or access any of the other beans that will be instantiated by Spring - especially not any of the beans, that are instantiated via @ConfigurationProperties, which we are interested in.
-
The Good News: we will present a way how to access initialized instances of all property sources, that will be presented to your app.
- -
If you write an EnvironmentPostProcessor, you will get access to an instance of ConfigurableEnvironment, that contains a complete list of all PropertySources, that are configured for your Spring-Boot-App.
-
public class MultipleBeansEnvironmentPostProcessor
    implements
      EnvironmentPostProcessor
{
  @Override
  public void postProcessEnvironment(
      ConfigurableEnvironment environment,
      SpringApplication application)
  {
    String sites =
        environment.getRequiredProperty("juplo.sites", String.class);

    application.addInitializers(
        new MultipleBeansApplicationContextInitializer(
            Arrays
                .stream(sites.split(","))
                .map(site -> site.trim())
                .toArray(size -> new String[size])));
  }
}
-
The Bad News: unfortunately, you have to scan all property-sources for the parameters that you are interested in. Also, all values are represented as strings in this early startup-phase of the application-context, because Spring's convenient conversion mechanisms are not available yet. So, you have to convert any values by yourself and stuff them into more complex data-structures as needed.
-
The Good News:
The property names are consistently represented in standard Java-Properties-Notation, regardless of the actual type (.properties / .yml) of the property source.
-
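For example, with the following (hypothetical) entry in application.properties, the post-processor above would register three controllers under the paths /foo, /bar and /baz:

juplo.sites=foo, bar, baz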
Finally, you have to register the EnvironmentPostProcessor with your Spring-Boot-App.
This is done in the file META-INF/spring.factories:
-
org.springframework.boot.env.EnvironmentPostProcessor=\
  de.juplo.demos.multiplebeans.MultipleBeansEnvironmentPostProcessor
-
That's it, you're done!
You can find the whole source code in a working mini-application on juplo.de and GitHub:

BeanDefinitionRegistryPostProcessor

Based on a very simple example-project we will implement the Outbox-Pattern with Kafka.
In this part, a small example-project is introduced, that features a component, which has to inform another component upon every successfully completed operation.
In this mini-series I will implement the Outbox-Pattern as described on Chris Richardson's fabulous website microservices.io.
The pattern enables you to send a message as part of a database transaction in a reliable way, effectively turning the writing of the data to the database and the sending of the message into an atomic operation: either both operations are successful or neither.
The pattern is well known and implementing it with Kafka looks like an easy, straightforward job at first glance. However, there are many obstacles that easily lead to an incomplete or incorrect implementation. In this blog-series, we will circumnavigate these obstacles together, step by step.
To illustrate our implementation, we will use a simple example-project. It mimics a part of the registration process for a web application: a (very!) simplistic service takes registration orders for new users.

A new user can be registered by POSTing the username to the endpoint /users. On success, the service answers with 201 (Created) and the URI of the newly created user in the Location-header:
echo peter | http :8080/users

HTTP/1.1 201
Content-Length: 0
Date: Fri, 05 Feb 2021 14:44:51 GMT
Location: http://localhost:8080/users/peter

echo peter | http :8080/users

HTTP/1.1 400
Connection: close
Content-Length: 0
Date: Fri, 05 Feb 2021 14:44:53 GMT

http :8080/users

HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Fri, 05 Feb 2021 14:53:59 GMT
Transfer-Encoding: chunked

[
  {
    "created": "2021-02-05T10:38:32.301",
    "loggedIn": false,
    "username": "peter"
  },
  ...
]
-
As our messaging use-case imagine that several processes have to happen after a successful registration of a new user. This may be the generation of an invoice, some business analytics or any other lengthy process that is best carried out asynchronously. Hence, we have to generate an event that informs the responsible services about new registrations.
Obviously, these events should only be generated, if the registration is completed successfully. The event must not be fired, if the registration is rejected because of a duplicate username.
On the other hand, the publication of the event must happen reliably, because otherwise the new user might not be charged for the services we offer...
-
The users are stored in a database and the creation of a new user happens in a transaction.
A "brilliant" colleague came up with the idea to trigger an IncorrectResultSizeDataAccessException to detect duplicate usernames:
-
User user = new User(username);
repository.save(user);
// Triggers an Exception, if more than one entry is found
repository.findByUsername(username);
-
-
The query for the user by its name triggers an IncorrectResultSizeDataAccessException, if more than one entry is found.
The uncaught exception will mark the transaction for rollback, hence canceling the requested registration.
The 400-response is then generated by a corresponding ExceptionHandler:
-
@ExceptionHandler
public ResponseEntity<?> incorrectResultSizeDataAccessException(
    IncorrectResultSizeDataAccessException e)
{
  LOG.info("User already exists!");
  return ResponseEntity.badRequest().build();
}
-
Please do not code this at home...
But this weird implementation perfectly illustrates the requirements for our messaging use-case: the user is written into the database. But the registration is not successfully completed until the transaction is committed. If the transaction is rolled back, no message must be sent, because no new user was registered.
-
In the example implementation I am using an EventPublisher to decouple the business logic from the implementation of the messaging.
The controller publishes an event, when a new user is registered:
-
publisher.publishEvent(new UserEvent(this, username));
-
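A minimal UserEvent might look like this (a sketch - the example project may define it slightly differently; it only has to be a subclass of ApplicationEvent, that carries the username):

public class UserEvent extends ApplicationEvent
{
  private final String username;

  public UserEvent(Object source, String username)
  {
    super(source);
    this.username = username;
  }

  public String getUsername()
  {
    return username;
  }
}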
-
A listener annotated with @TransactionalEventListener receives the events and handles the messaging:
-
@TransactionalEventListener
public void onUserEvent(UserEvent event)
{
  // Sending the message happens here...
}
-
In non-critical use-cases, it might be sufficient to actually send the message to Kafka right here. Spring ensures, that the method of the listener is only called, if the transaction completes successfully. But in the case of a failure this naive implementation can lose messages: if the application crashes after the transaction has completed, but before the message could be sent, the event is lost.
In the following blog posts, we will step by step implement a solution based on the Outbox-Pattern, that can guarantee Exactly-Once semantics for the sent messages.
The complete source code of the example-project can be cloned here:

git clone https://juplo.de/git/demos/spring/data-jdbc
git clone https://github.com/juplo/demos-spring-data-jdbc.git

It includes a setup for Docker Compose, that can be run without compiling the project, and a runnable README.sh, that compiles and runs the application and illustrates the example.
Based on a very simple example-project we will implement the Outbox-Pattern with Kafka.
In this part, we will implement the outbox (aka: the queueing of the messages in a database-table).
The outbox is represented by an additional table in the database. This table acts as a queue for messages, that should be sent as part of the transaction. Instead of sending the messages, the application stores them in the outbox-table. The actual sending of the messages occurs outside of the transaction.
Because the messages are read from the table outside of the transaction context, only entries related to successfully committed transactions are visible. Hence, the sending of the message effectively becomes a part of the transaction: it happens only, if the transaction was successfully completed. Messages associated with an aborted transaction will not be sent.
No special measures need to be taken when writing the messages to the table. The only thing to be sure of is that the writing takes part in the transaction.
In our implementation, we simply store the serialized message, together with a key, that is needed for the partitioning of your data in Kafka, in case the order of the messages is important. We also store a timestamp, that we plan to record as Event Time later.
One more thing worth noting is that we utilize the database to create a unique record-ID. The generated unique and monotonically increasing ID is required later for the implementation of Exactly-Once semantics.
--
The SQL for the table looks like this:

CREATE TABLE outbox (
  id BIGINT PRIMARY KEY AUTO_INCREMENT,
  key VARCHAR(127),
  value VARCHAR(1023),
  issued TIMESTAMP
);
-
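A repository method backing this table might look like the following sketch, using Spring's JdbcTemplate (the class and method names are assumptions - the actual project may differ). The only requirement is that save() is called inside the business transaction, so that the insert takes part in it:

import java.sql.Timestamp;
import java.time.ZonedDateTime;
import org.springframework.jdbc.core.JdbcTemplate;

public class OutboxRepository
{
  private final JdbcTemplate jdbcTemplate;

  public OutboxRepository(JdbcTemplate jdbcTemplate)
  {
    this.jdbcTemplate = jdbcTemplate;
  }

  /** Must participate in the surrounding transaction - the id is generated by the database. */
  public void save(String key, String value, ZonedDateTime time)
  {
    jdbcTemplate.update(
        "INSERT INTO outbox (key, value, issued) VALUES (?, ?, ?)",
        key,
        value,
        Timestamp.from(time.toInstant()));
  }
}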
In order to decouple the business logic from the implementation of the messaging mechanism, I have implemented a thin layer, that uses Spring Application Events to publish the messages.
-
Messages are sent as a subclass of ApplicationEvent:
publisher.publishEvent(
    new UserEvent(
        this,
        username,
        CREATED,
        ZonedDateTime.now(clock)));
-
-
The event takes a key (username) and an object as value (an instance of an enum in our case).
An EventListener receives the events and writes them into the outbox-table:
-
@TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
public void onUserEvent(OutboxEvent event)
{
  try
  {
    repository.save(
        event.getKey(),
        mapper.writeValueAsString(event.getValue()),
        event.getTime());
  }
  catch (JsonProcessingException e)
  {
    throw new RuntimeException(e);
  }
}
-
-
The @TransactionalEventListener is not really needed here.
A normal EventListener would also suffice, because Spring immediately executes all registered normal event listeners.
Therefore, the registered listeners would run in the same thread that published the event, and participate in the existing transaction.
-
But if a @TransactionalEventListener is used, like in our example project, it is crucial that the phase is switched to BEFORE_COMMIT when the Outbox-Pattern is introduced.
This is because the listener has to be executed in the same transaction context in which the event was published.
Otherwise, the writing of the messages would not be coupled to the success or abortion of the transaction, thus violating the idea of the pattern.
-
-
Since this part of the implementation only stores the messages in a normal database, it can be published as an independent component that does not require any dependencies on Kafka. To highlight this, the implementation of this step does not use Kafka at all. In a later step, we will extract the layer, that decouples the business code from our messaging logic, into a separate package.
The complete source code of the example-project can be cloned here:

git clone -b part-1 https://juplo.de/git/demos/spring/data-jdbc
git clone -b part-1 https://github.com/juplo/demos-spring-data-jdbc.git

This version only includes the logic that is needed to fill the outbox-table. Reading the messages from this table and sending them through Kafka will be the topic of the next part of this blog-series.
The sources include a setup for Docker Compose, that can be run without compiling the project, and a runnable README.sh, that compiles and runs the application and illustrates the example.