Actuator HTTP Trace Does Not Work With Spring Boot 2.2.x

TL;DR

In Spring Boot 2.2.x, you have to provide a @Bean of type InMemoryHttpTraceRepository to enable the HTTP Trace Actuator.

Jump to the explanation and example code for the fix

Enabling HTTP Trace — Before 2.2.x...

Spring Boot comes with a very handy feature called Actuator.
Actuator provides a built-in, production-ready REST-API that can be used to monitor / manage / debug your bootified app.
To enable it — prior to 2.2.x — one only had to:

  1. Specify the dependency for Spring Boot Actuator:

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    
    
  2. Expose the needed endpoints via HTTP:

    management.endpoints.web.exposure.include=*
    
    • This exposes all available endpoints via HTTP.
    • Advice: Do not copy this into a production config
      (without thinking about it twice and — at the very least — enabling some security measures to protect the exposed endpoints!)

The problem: It simply does not work any more in 2.2 :(

But…

  • If you upgrade your existing app with a working httptrace-actuator to Spring Boot 2.2.x, or
  • If you start with a fresh app in Spring Boot 2.2.x and try to enable the httptrace-actuator as described in the documentation

…it simply does not work at all!

The Fix

The simple fix for this problem is to add a @Bean of type InMemoryHttpTraceRepository to your @Configuration-class:

@Bean
public HttpTraceRepository httpTraceRepository()
{
  return new InMemoryHttpTraceRepository();
}
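With this bean in place, the trace-endpoint answers again. A quick check (assuming your app runs on localhost:8080 and the endpoints are exposed as shown above):

curl http://localhost:8080/actuator/httptrace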

The Explanation

The cause of this problem is not a bug, but a legitimate change in the default configuration.
Unfortunately, this change is not mentioned in the corresponding section of the documentation.
Instead, it is buried in the Upgrade Notes for Spring Boot 2.2.

The default-implementation stores the captured data in memory.
Hence, it consumes a lot of memory, without the user knowing (or, even worse, needing) it.
This is especially undesirable in cluster environments, where memory is a precious resource.
And remember: Spring Boot was invented to simplify cluster deployments!

That is why this feature is now turned off by default and has to be turned on explicitly by the user, if needed.

Create Self-Signed Multi-Domain (SAN) Certificates

TL;DR

The SAN-extension is removed during signing, if not respecified explicitly.
To create a private CA with self-signed multi-domain certificates for your development setup, you simply have to:

  1. Run create-ca.sh to generate the root-certificate for your private CA.
  2. Run gencert.sh NAME to generate self-signed certificates for the CN NAME with an exemplary SAN-extension.

Subject Alternative Name (SAN) And Self-Signed Certificates

Multi-Domain certificates are implemented as a certificate-extension called Subject Alternative Name (SAN).
One can simply specify the additional domains (or IPs) when creating a certificate.

The following example shows the syntax for the keytool-command, which comes with the JDK and is frequently used by Java-programmers to create certificates:

keytool \
 -keystore test.jks -storepass confidential -keypass confidential \
 -genkey -alias test -validity 365 \
 -dname "CN=test,OU=security,O=juplo,L=Juist,ST=Niedersachsen,C=DE" \
 -ext "SAN=DNS:test,DNS:localhost,IP:127.0.0.1"

If you list the content of the newly created keystore with…

keytool -list -v -keystore test.jks

…you should see a section like the following one:

#1: ObjectId: 2.5.29.17 Criticality=false
SubjectAlternativeName [
  DNSName: test
  DNSName: localhost
  IPAddress: 127.0.0.1
]

The certificate is also valid for these additionally specified domains and IPs.

The problem is that this certificate is not signed and will not be trusted, unless you distribute it explicitly through a truststore.
This is feasible, if you just want to authenticate and encrypt a single point-to-point communication.
But if more clients and/or servers have to be authenticated to each other, updating and distributing the truststore will soon become hell.

The common solution in this situation is to create a private CA that can sign newly created certificates.
This way, only the root-certificate of that private CA has to be distributed.
Clients that know the root-certificate of the private CA will automatically trust all certificates that are signed by that CA.

But unfortunately, if you sign your certificate, the SAN-extension vanishes: the signed certificate is only valid for the CN.
(One may think that you just have to export the SAN-extension into the certificate-signing-request, since it is not exported by default, but the SAN will still be lost after signing the extended request…)
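Just to illustrate that second point: you can export the extension into the request with keytool's -ext option (a sketch, reusing the keystore from above), but the CA will still strip it during signing unless it is respecified there:

keytool \
 -keystore test.jks -storepass confidential \
 -certreq -alias test -file cert-file \
 -ext "SAN=DNS:test,DNS:localhost,IP:127.0.0.1"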

This removal of the SAN-extension is not a bug, but a feature.
A CA has to stay in control of which domains and IPs it signs certificates for.
If a client could write arbitrary additional domains into the SAN-extension of his certificate-signing-request, he could fool the CA into signing a certificate for any domain.
Hence, all entries in a SAN-extension are removed by default during signing.

This default behavior is very annoying if you just want to run your own private CA to authenticate all your services to each other.

In the following sections, I will walk you through a solution to circumvent this pitfall.
If you just need a working solution for your development setup, you may skip the explanation and just download the scripts that combine the presented steps.

Recipe To Create A Private CA With Self-Signed Multi-Domain Certificates

Create And Distribute The Root-Certificate Of The CA

We are using openssl to create the root-certificate of our private CA:

openssl req \
  -new -x509 -subj "/C=DE/ST=Niedersachsen/L=Juist/O=juplo/OU=security/CN=Root-CA" \
  -keyout ca-key -out ca-cert -days 365 -passout pass:extraconfidential

This should create two files:

  • ca-cert, the root-certificate of your CA
  • ca-key, the private key of your CA with the password extraconfidential

Be sure to protect ca-key and its password, because anyone who has access to both of them can sign certificates in the name of your CA!

To distribute the root-certificate, so that your Java-clients can trust all certificates that are signed by your CA, you have to import the root-certificate into a truststore and make that truststore available to your Java-clients:

keytool \
  -keystore truststore.jks -storepass confidential \
  -import -alias ca-root -file ca-cert -noprompt
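To double-check the import, you can list the contents of the truststore:

keytool -list -v -keystore truststore.jks -storepass confidential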

Create A Certificate-Signing-Request For Your Certificate

We are reusing the already created certificate here.
If you create a new one, there is no need to specify the SAN-extension, since it will not be exported into the request, and this version of the certificate will be overwritten when the signed certificate is reimported:

keytool \
  -keystore test.jks -storepass confidential \
  -certreq -alias test -file cert-file

This will create the file cert-file, which contains the certificate-signing-request.
This file can be deleted after the certificate has been signed (which is done in the next step).

Sign The Request, Adding The Additional Domains In A SAN-Extension

We use openssl x509 to sign the request:

openssl x509 \
  -req -CA ca-cert -CAkey ca-key -in cert-file -out test.pem \
  -days 365 -CAcreateserial -passin pass:extraconfidential \
  -extensions SAN -extfile <(printf "\n[SAN]\nsubjectAltName=DNS:test,DNS:localhost,IP:127.0.0.1")

This can also be done with openssl ca, which has a slightly different and a little more complicated API.
openssl ca is meant to manage a real, full-blown CA.
But we do not need the extra options and complexity for our simple private CA.

The important part here is all that comes after -extensions SAN.
It specifies the Subject-Alternative-Name-section that we want to additionally include in the signed certificate.
Because we are in full control of our private CA, we can specify any domains and/or IPs here that we want.
The other options are ordinary certificate-signing-stuff that is already better explained elsewhere.

We use a special syntax with the option -extfile, which allows us to specify the contents of a virtual file as part of the command.
You can just as well write your SAN-extension into a file and hand over the name of that file here, as is usually done.
If you want to specify the same SAN-extension in a file, that file would have to contain:

[SAN]
subjectAltName=DNS:test,DNS:localhost,IP:127.0.0.1

Note that the name that you give the extension on the command-line with -extensions SAN has to match the section-header in the (virtual) file ([SAN]).

As a result of the command, the file test.pem will be created, which contains the signed x509-certificate.
You can display the contents of that certificate in a human-readable form with:

openssl x509 -in test.pem -text

It should display something similar to this example-output.

Import The Root-Certificate Of The CA And The Signed Certificate Into The Keystore

If you want clients that only know the root-certificate of your CA to trust your Java-service, you have to build up a Chain-of-Trust that leads from the known root-certificate to the signed certificate that your service uses to authenticate itself.
(Note: SSL-encryption always includes the authentication of the service a client connects to through its certificate!)
In our case, that chain only has two entries, because our certificate was signed directly by the root-certificate.
Therefore, you have to import the root-certificate (ca-cert) and your signed certificate (test.pem) into a keystore and make that keystore available to the Java-service, in order to enable it to authenticate itself with the signed certificate when a client connects.

Import the root-certificate of the CA:

keytool \
 -keystore test.jks -storepass confidential \
 -import -alias ca-root -file ca-cert -noprompt

Import the signed certificate (this will overwrite the unsigned version):

keytool \
 -keystore test.jks -storepass confidential \
 -import -alias test -file test.pem

That’s it: we are done!

You can validate the contents of the created keystore with:

keytool \
 -keystore test.jks -storepass confidential \
 -list -v

It should display something similar to this example-output.

To authenticate service A to client B, you will have to:

  • make the keystore test.jks available to the service A
  • make the truststore truststore.jks available to the client B

If you want your clients to also authenticate themselves to your services, so that only clients with a trusted certificate can connect (2-Way-Authentication), client B also needs its own signed certificate to authenticate against service A, and service A also needs access to the truststore, to be able to trust that certificate.

Simple Example-Scripts To Create A Private CA And Self-Signed Certificates With SAN-Extension

The following two scripts automate the presented steps and may be useful, when setting up a private CA for Java-development:

  • Run create-ca.sh to create the root-certificate for the CA and import it into a truststore (creates ca-cert and ca-key and the truststore truststore.p12)
  • Run gencert.sh CN to create a certificate for the common name CN, sign it using the private CA (also exemplarily adding alternative names) and build up a valid Chain-of-Trust in a keystore (creates CN.pem and the keystore CN.p12)
  • Global options can be set in the configuration file settings.conf

Read the source for more options…

Differing from the steps shown above, these scripts use the keystore-format PKCS12.
This is because otherwise keytool nags about the non-standard default-format JKS in each and every step.

Note: PKCS12 does not distinguish between a store-password and a key-password. Hence, only a store-password is specified in the scripts.

Encrypt Communication Between Kafka And ZooKeeper With TLS

TL;DR

  1. Download and unpack zookeeper+tls.tgz.
  2. Run README.sh for a fully automated example of the presented setup.

Copy and paste to execute the two steps on Linux:

curl -sc - https://juplo.de/wp-uploads/zookeeper+tls.tgz | tar -xzv && cd zookeeper+tls && ./README.sh

A German translation of this article can be found on http://trion.de.

Current Kafka Cannot Encrypt ZooKeeper-Communication

Up until now (version 2.3.0 of Apache Kafka), it is not possible to encrypt the communication between the Kafka-Brokers and their ZooKeeper-ensemble.
This is because ZooKeeper 3.4.14, which is shipped with Apache Kafka 2.3.0, lacks support for TLS-encryption.

The documentation plays this down with the observation that usually only non-sensitive data (configuration-data and status-information) is stored in ZooKeeper, and that it would not matter if this data were world-readable, as long as it can be protected against manipulation, which can be done through proper authentication and ACLs for zNodes:

The rationale behind this decision is that the data stored in ZooKeeper is not sensitive, but inappropriate manipulation of znodes can cause cluster disruption. (Kafka-Documentation)

This quote obscures the fact, mentioned elsewhere, that there are use-cases that store sensitive data in ZooKeeper: for example, if authentication via SASL/SCRAM or Delegation Tokens is used.
Accordingly, the documentation repeatedly stresses that there usually is no need to make ZooKeeper accessible to normal clients.
Nowadays, only admin-tools need direct access to the ZooKeeper-ensemble.
Hence, it is stated as a best practice to make the ensemble available only on a local network, hidden behind a firewall or the like.

In plain language: one must not run a Kafka-Cluster that spans more than one data-center — or must at least make sure that all communication is tunneled through a virtual private network.

ZooKeeper 3.5.5 To The Rescue

On May 20th, 2019, version 3.5.5 of ZooKeeper was released.
Version 3.5.5 is the first stable release of the 3.5.x branch and introduces the support for TLS-encryption that the community has long yearned for.
It supports the encryption of all communication between the nodes of a ZooKeeper-ensemble and between ZooKeeper-Servers and -Clients.

Part of ZooKeeper is a sophisticated client-API that provides a convenient abstraction for the communication between clients and servers over the Atomic Broadcast Protocol.
The TLS-encryption is applied transparently by this API.
Because of that, all client-implementations can profit from this new feature through a simple library-upgrade from 3.4.14 to 3.5.5.

This article will walk you through an example that shows how to carry out such a library-upgrade for Apache Kafka 2.3.0 and how to configure a cluster to use TLS-encryption when communicating with a standalone ZooKeeper.

Disclaimer

The presented setup is meant for evaluation only!

It fiddles with the libraries used by Kafka, which might cause unforeseen issues.
Furthermore, using TLS-encryption in ZooKeeper requires one to switch from the battle-tested NIOServerCnxnFactory, which uses the NIO-API directly, to the newly introduced NettyServerCnxnFactory, which is built on top of Netty.

Recipe To Enable TLS Between Broker And ZooKeeper

The article will now walk you through the setup step by step.
If you just want to evaluate the example, you can jump to the download-links.

All commands must be executed in the same directory.
We recommend creating a new directory for that purpose.

Download Kafka and ZooKeeper

First of all: Download version 2.3.0 of Apache Kafka and version 3.5.5 of Apache ZooKeeper:

curl -sc - http://ftp.fau.de/apache/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz | tar -xzv
curl -sc - http://ftp.fau.de/apache/kafka/2.3.0/kafka_2.12-2.3.0.tgz | tar -xzv

Switch Kafka 2.3.0 from ZooKeeper 3.4.14 to ZooKeeper 3.5.5

Remove the 3.4.14-version from the libs-directory of Apache Kafka:

rm -v kafka_2.12-2.3.0/libs/zookeeper-3.4.14.jar

Then copy the JARs of the new version of Apache ZooKeeper into that directory. (The last JAR is only needed for CLI-clients, like for example zookeeper-shell.sh.)

cp -av apache-zookeeper-3.5.5-bin/lib/zookeeper-3.5.5.jar kafka_2.12-2.3.0/libs/
cp -av apache-zookeeper-3.5.5-bin/lib/zookeeper-jute-3.5.5.jar kafka_2.12-2.3.0/libs/
cp -av apache-zookeeper-3.5.5-bin/lib/netty-all-4.1.29.Final.jar kafka_2.12-2.3.0/libs/
cp -av apache-zookeeper-3.5.5-bin/lib/commons-cli-1.2.jar kafka_2.12-2.3.0/libs/

That is all there is to do to upgrade ZooKeeper.
If you run one of the Kafka-commands, it will use ZooKeeper 3.5.5 from now on.
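To double-check the swap, list the ZooKeeper-JARs that Kafka will pick up from now on; only the 3.5.5-versions should show up:

ls kafka_2.12-2.3.0/libs/zookeeper-*.jar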

Create A Private CA And The Needed Certificates


You can read more about setting up a private CA in this post

Create the root-certificate for the CA and store it in a Java-truststore:

openssl req -new -x509 -days 365 -keyout ca-key -out ca-cert -subj "/C=DE/ST=NRW/L=MS/O=juplo/OU=kafka/CN=Root-CA" -passout pass:superconfidential
keytool -keystore truststore.jks -storepass confidential -import -alias ca-root -file ca-cert -noprompt

The following commands will create a self-signed certificate in zookeeper.jks.
What happens is:

  1. Create a new key-pair and certificate for zookeeper
  2. Generate a certificate-signing-request for that certificate
  3. Sign the request with the key of the private CA, adding a SAN-extension, so that the signed certificate is also valid for localhost
  4. Import the root-certificate of the private CA into the keystore zookeeper.jks
  5. Import the signed certificate for zookeeper into the keystore zookeeper.jks


You can read more about creating self-signed certificates with multiple domains and building a Chain-of-Trust here

NAME=zookeeper
keytool -keystore $NAME.jks -storepass confidential -alias $NAME -validity 365 -genkey -keypass confidential -dname "CN=$NAME,OU=kafka,O=juplo,L=MS,ST=NRW,C=DE"
keytool -keystore $NAME.jks -storepass confidential -alias $NAME -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out $NAME.pem -days 365 -CAcreateserial -passin pass:superconfidential -extensions SAN -extfile <(printf "\n[SAN]\nsubjectAltName=DNS:$NAME,DNS:localhost")
keytool -keystore $NAME.jks -storepass confidential -import -alias ca-root -file ca-cert -noprompt
keytool -keystore $NAME.jks -storepass confidential -import -alias $NAME -file $NAME.pem

Repeat this with the following names (or use the loop sketched after this list):

  • NAME=kafka-1
  • NAME=kafka-2
  • NAME=client
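A minimal sketch of such a loop (it only parameterizes the commands shown above; the process-substitution requires bash):

for NAME in kafka-1 kafka-2 client
do
  keytool -keystore $NAME.jks -storepass confidential -alias $NAME -validity 365 -genkey -keypass confidential -dname "CN=$NAME,OU=kafka,O=juplo,L=MS,ST=NRW,C=DE"
  keytool -keystore $NAME.jks -storepass confidential -alias $NAME -certreq -file cert-file
  openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out $NAME.pem -days 365 -CAcreateserial -passin pass:superconfidential -extensions SAN -extfile <(printf "\n[SAN]\nsubjectAltName=DNS:$NAME,DNS:localhost")
  keytool -keystore $NAME.jks -storepass confidential -import -alias ca-root -file ca-cert -noprompt
  keytool -keystore $NAME.jks -storepass confidential -import -alias $NAME -file $NAME.pem
done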

Now we have signed certificates for all participants in our small example, stored in separate keystores, each with a Chain-of-Trust set up that is rooted in our private CA.
We also have a truststore that will validate all these certificates, because it contains the root-certificate of the Chain-of-Trust: the certificate of our private CA.

Configure And Start ZooKeeper

We highlight/explain only the configuration-options that are needed for TLS-encryption here!

In our setup, the standalone ZooKeeper essentially needs two specially tweaked configuration-files to use encryption.

Create the file java.env:

SERVER_JVMFLAGS="-Xms512m -Xmx512m -Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory"
ZOO_LOG_DIR=.
  • The Java system-property zookeeper.serverCnxnFactory switches the connection-factory to the Netty-framework.
    Without this, TLS is not possible!

Create the file zoo.cfg:

dataDir=/tmp/zookeeper
secureClientPort=2182
maxClientCnxns=0
authProvider.1=org.apache.zookeeper.server.auth.X509AuthenticationProvider
ssl.keyStore.location=zookeeper.jks
ssl.keyStore.password=confidential
ssl.trustStore.location=truststore.jks
ssl.trustStore.password=confidential
  • secureClientPort: We only allow encrypted connections!
    (If we want to allow unencrypted connections too, we can just specify clientPort additionally.)
  • authProvider.1: Selects authentication through client-certificates
  • ssl.keyStore.*: Specifies the path to and password of the keystore with the zookeeper-certificate
  • ssl.trustStore.*: Specifies the path to and password of the common truststore with the root-certificate of our private CA

Copy the file log4j.properties into the current working directory to enable logging for ZooKeeper (see also java.env):

cp -av apache-zookeeper-3.5.5-bin/conf/log4j.properties .

Start the ZooKeeper-Server:

apache-zookeeper-3.5.5-bin/bin/zkServer.sh --config . start
  • --config .: The script should search the current directory for the configuration-data and certificates.

Configure And Start The Brokers


We highlight/explain only the configuration-options and start-parameters that are needed to encrypt the communication between the Kafka-Brokers and the ZooKeeper-Server!

The other SSL-related parameters shown here are only needed for securing the communication between the brokers themselves and between brokers and clients.
You can read all about them in the standard documentation.
In short: this example is set up to use SSL for authentication between the brokers and SASL/PLAIN for client-authentication — both channels are encrypted with TLS.

TLS for the ZooKeeper Client-API is configured through Java system-properties.
Hence, most of the SSL-configuration for connecting to ZooKeeper has to be specified when starting the broker.
Only the address and port for the connection itself are specified in the configuration-file.

Create the file kafka-1.properties:

broker.id=1
zookeeper.connect=zookeeper:2182
listeners=SSL://kafka-1:9193,SASL_SSL://kafka-1:9194
security.inter.broker.protocol=SSL
ssl.client.auth=required
ssl.keystore.location=kafka-1.jks
ssl.keystore.password=confidential
ssl.key.password=confidential
ssl.truststore.location=truststore.jks
ssl.truststore.password=confidential
listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required user_consumer="pw4consumer" user_producer="pw4producer";
sasl.enabled.mechanisms=PLAIN
log.dirs=/tmp/kafka-1-logs
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=2
  • zookeeper.connect: If you also allow insecure connections, be sure to specify the right port here!
  • All other options are not relevant for encrypting the connections to ZooKeeper.

Start the broker in the background and remember its PID in the file KAFKA-1:

(
  export KAFKA_OPTS="
    -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
    -Dzookeeper.client.secure=true
    -Dzookeeper.ssl.keyStore.location=kafka-1.jks
    -Dzookeeper.ssl.keyStore.password=confidential
    -Dzookeeper.ssl.trustStore.location=truststore.jks
    -Dzookeeper.ssl.trustStore.password=confidential
  "
  kafka_2.12-2.3.0/bin/kafka-server-start.sh kafka-1.properties & echo $! > KAFKA-1
) > kafka-1.log &

Check the logfile kafka-1.log to confirm that the broker starts without errors!

  • zookeeper.clientCnxnSocket: Switches from NIO to the Netty-Framework.
    Without this, the ZooKeeper Client-API (just like the ZooKeeper-Server) cannot use TLS!
  • zookeeper.client.secure=true: Switches on TLS-encryption for all connections to any ZooKeeper-Server
  • zookeeper.ssl.keyStore.*: Specifies the path to and password of the keystore with the kafka-1-certificate
  • zookeeper.ssl.trustStore.*: Specifies the path to and password of the common truststore with the root-certificate of our private CA


Do the same for kafka-2!
And do not forget to adapt the config-file accordingly — or better: just download a copy...
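If you do not want to download it, the second config can also be derived from the first one; a sketch (double-check the result, and adapt the ports if both brokers run on the same host):

sed -e 's/kafka-1/kafka-2/g' -e 's/^broker.id=1$/broker.id=2/' kafka-1.properties > kafka-2.properties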

Configure And Execute The CLI-Clients

All scripts from the Apache-Kafka-Distribution that connect to ZooKeeper are configured in the same way as seen for kafka-server-start.sh.
For example, to create a topic, you will run:

export KAFKA_OPTS="
  -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
  -Dzookeeper.client.secure=true
  -Dzookeeper.ssl.keyStore.location=client.jks
  -Dzookeeper.ssl.keyStore.password=confidential
  -Dzookeeper.ssl.trustStore.location=truststore.jks
  -Dzookeeper.ssl.trustStore.password=confidential
"
kafka_2.12-2.3.0/bin/kafka-topics.sh \
  --zookeeper zookeeper:2182 \
  --create --topic test \
  --partitions 1 --replication-factor 2

Note: A different keystore is used here (client.jks)!

CLI-clients that connect to the brokers can be called as usual.

In this example, they use an encrypted listener on port 9194 (for kafka-1) and are authenticated using SASL/PLAIN.
The client-configuration is kept in the files consumer.config and producer.config.
Take a look at those files and compare them with the broker-configuration above.
If you want to learn more about securing broker/client-communication, we refer you to the official documentation.
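For example, a console-producer and -consumer could be started like this (a sketch; it assumes that the hostname kafka-1 resolves to your broker and that the config-files from the download are present):

kafka_2.12-2.3.0/bin/kafka-console-producer.sh \
  --broker-list kafka-1:9194 --topic test \
  --producer.config producer.config

kafka_2.12-2.3.0/bin/kafka-console-consumer.sh \
  --bootstrap-server kafka-1:9194 --topic test --from-beginning \
  --consumer.config consumer.config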


If you have trouble starting these clients, download the scripts and take a look at the examples in README.sh.

TBD: Further Steps To Take...

This recipe only activates TLS-encryption between Kafka-Brokers and a Standalone ZooKeeper.
It does not show how to enable TLS between ZooKeeper-nodes (which should be easy), or whether it is possible to authenticate Kafka-Brokers via TLS-certificates. These topics will be covered in future articles...

Fully Automated Example Of The Presented Setup

Download and unpack zookeeper+tls.tgz for an evaluation of the presented setup:

curl -sc - https://juplo.de/wp-uploads/zookeeper+tls.tgz | tar -xzv

The archive contains a fully automated example.
Just run README.sh in the unpacked directory.

It downloads the required software, carries out the library-upgrade, creates the required certificates and starts a standalone ZooKeeper and two Kafka-Brokers that use TLS to encrypt all communication.
It also executes a console-consumer and a console-producer, which read from and write to a topic, and a zookeeper-shell, which communicates directly with the ZooKeeper-node, to prove that the setup is working.
The ZooKeeper- and broker-instances are left running, to enable evaluation of the fully encrypted cluster.

Usage

  • Run README.sh to execute the automated example
  • After running README.sh, the Kafka-Cluster will still be running, so that one can experiment by hand with commands from README.sh
  • README.sh can be executed repeatedly: it will skip all setup-steps that have already been done
  • Run README.sh stop to stop the Kafka-Cluster (it can be restarted by re-running README.sh)
  • Run README.sh cleanup to stop the cluster and remove all created files and data (only the downloaded packages will be left untouched)

Separate Downloads For The Packaged Files

XPath 2.0 deep-equal() Does Not Match As Expected – The Problem With Whitespace

I just stumbled across a problem with the deep-equal()-function introduced by XPath 2.0.
It cost me at least two hours to find out what was going on.
So I want to share this with you, in case you are wasting time on the same problem and trying to find a solution via Google ;)

If you have never heard of deep-equal() and just wonder how to compare XML-nodes the right way, you should probably read this excellent article about equality in XSLT as a starter.

My Problem

My problem was that I wanted to parse/output a node only if there is no node on the ancestor-axis that has an exact duplicate of that node as a direct child.

The Difference Between A Comparison With = And With deep-equal()

If you just use simple equality (with = or eq), the two compared nodes are implicitly converted into strings.
That is no problem if you are comparing attributes, or nodes that only contain text.
But in all other cases, you will only compare the text-contents of the two nodes and their children.
Hence, if they differ only in an attribute, your test will report that they are equal, which might not be what you are expecting.

For example, the XPath-expression

//child/ref[ancestor::parent/ref=.]

will match the <ref>-node with @id='bar' that is nested inside the <child>-node in this example-XML, which I was not expecting:

<root>
  <parent>
    <ref id="foo"><content>Same Text-Content</content></ref>
    <child>
      <ref id="bar"><content>Same Text-Content</content></ref>
    </child>
  </parent>
</root>

So, what I tried after I found out about deep-equal() was the following XPath-expression, which solves the problem in the above example:

//child/ref[deep-equal(ancestor::parent/ref,.)]

The Unexpected Behaviour Of deep-equal()

But moving on, I stumbled across cases where I was expecting a match, but deep-equal() did not match the nodes.
For example:

<root>
  <parent>
    <ref id="same">
      <content>Same Text-Content</content>
    </ref>
    <child>
      <ref id="same">
        <content>Same Text-Content</content>
      </ref>
    </child>
  </parent>
</root>

You will probably catch the difference at first glance, since I laid out the examples accordingly and gave you a hint in the heading of this post – but it really took me a long time to get it:

It is all about whitespace!

deep-equal() compares all child-nodes and only yields a match if the compared nodes have exactly the same child-nodes.
But in the second example, the compared <ref>-nodes contain whitespace before and after their child-node <content>.
And this whitespace in fact forms implicit child-nodes of type text.
Hence, the two nodes in the second example differ, because the indentation of the second one has two more spaces.

The solution…?

Unfortunately, I do not really know a good solution.
(If you come up with one, feel free to note or link it in the comments!)

The best solution would be an optional additional argument for deep-equal() that tells the function to ignore such whitespace.
In fact, some XSLT-processors do provide such an argument.

The only other solution I can think of is to write another XSLT-script that removes all the whitespace between tags, circumventing this (at first glance unexpected) behaviour of deep-equal().
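For the record, a minimal sketch of such a preprocessing-script: an identity-transform that uses xsl:strip-space to drop all whitespace-only text-nodes before the comparison:

<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Remove whitespace-only text-nodes from all elements -->
  <xsl:strip-space elements="*"/>
  <!-- Identity-transform: copy everything else unchanged -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>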

Funded by the European Union

This article was published in the course of a
research-project
that is funded by the European Union and the federal state North Rhine-Westphalia.


European Union: Investing in our future - European Regional Development Fund
EFRE.NRW 2014-2020: Investments in Growth and Employment

Show Spring-Boot Auto-Configuration-Report When Running Via “mvn spring-boot:run”

There are a lot of explanations of how to turn on the Auto-Configuration-Report offered by Spring-Boot to debug the configuration of one's app.
For a good example, take a look at this little Spring Boot troubleshooting auto-configuration guide.
But most often, when I want to see the Auto-Configuration-Report, I am running my app via mvn spring-boot:run.
And, unfortunately, none of the guides you can find via Google tells you how to turn on the Auto-Configuration-Report in this case.
Hence, I hope I can help out with this little tip.

How To Turn On The Auto-Configuration-Report When Running mvn spring-boot:run

The report is shown if the log-level for org.springframework.boot.autoconfigure.logging is set to DEBUG.
The simplest way to do that is to add the following line to your src/main/resources/application.properties:

logging.level.org.springframework.boot.autoconfigure.logging=DEBUG

I was not able to enable the logging via a command-line switch.
The seemingly obvious way of adding the property to the command line with a -D like this:

mvn spring-boot:run -Dlogging.level.org.springframework.boot.autoconfigure.logging=DEBUG

did not work for me.
If anyone could point out how to do that in a comment to this post, I would be really grateful!
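One guess, which I have not verified: spring-boot:run may run the app in a forked JVM, in which case a -D on the Maven command-line never reaches the app. If that is the cause, handing the property to the forked JVM explicitly via the plugin-parameter run.jvmArguments (called spring-boot.run.jvmArguments in newer versions of the plugin) should help:

mvn spring-boot:run -Drun.jvmArguments="-Dlogging.level.org.springframework.boot.autoconfigure.logging=DEBUG"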

Funded by the European Union

This article was published in the course of a
research-project
that is funded by the European Union and the federal state North Rhine-Westphalia.


European Union: Investing in our future - European Regional Development Fund
EFRE.NRW 2014-2020: Investments in Growth and Employment

Problems Deploying A Spring-Boot-App As WAR

Spring-Boot-App Is Not Started, When Deployed As WAR

Recently, I had a lot of trouble deploying my Spring-Boot-app as a WAR under Tomcat 8 on Debian Jessie.
The WAR was found and deployed by Tomcat, but the app was never started.
Browsing the URL of the app resulted in a 404.
And instead of the fancy Spring-Boot ASCII-art banner, the only matching entry that showed up in my log-file was:

INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Spring WebApplicationInitializers detected on classpath: [org.springframework.boot.autoconfigure.jersey.JerseyAutoConfiguration$JerseyWebApplicationInitializer@1fe086c]

A blog-post from Stefan Isle led me to the solution of what was going wrong.
In my case, there was no wrong version of Spring on the classpath.
But my WebApplicationInitializer was not found, because I had compiled it with a version of Java that was not available on my production system.

WebApplicationInitializer Not Found Because Of Wrong Java Version

On my development box, I had compiled and tested the WAR with Java 8.
But on my production system, running Debian 8 (Jessie), only Java 7 was available.
And because of that, my WebApplicationInitializer was not found when the app was running under Java 7.

After installing Java 8 from debian-backports on my production system, as described in this nice Debian-upgrade note, the WebApplicationInitializer of my app was found and everything worked like a charm again.
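If upgrading the production-JVM is not an option, you can instead pin the build to the Java-version available there, so the mismatch cannot happen in the first place; a minimal sketch for the pom.xml:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <!-- Compile for the Java-version available on the production-system -->
    <source>1.7</source>
    <target>1.7</target>
  </configuration>
</plugin>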

Funded by the European Union

This article was published in the course of a
research-project
that is funded by the European Union and the federal state North Rhine-Westphalia.


European Union: Investing in our future - European Regional Development Fund
EFRE.NRW 2014-2020: Investments in Growth and Employment

hibernate-maven-plugin 2.0.0 released!

Today we released version 2.0.0 of the hibernate-maven-plugin to Maven Central!

Why Now?

During one of our other projects ‐ the development of a vertical search-engine for events and locations, which is funded by the Ministry of Economic Affairs of NRW ‐ we realized that we needed Hibernate 5 and some of the more sophisticated JPA-configuration-options.

Unfortunately ‐ for us ‐ the old releases of this plugin supported neither Hibernate 5 nor all of the configuration-options that are available in META-INF/persistence.xml.

Fortunately ‐ for you ‐ we decided that we really needed all of that and had to integrate it into our little plugin.

Nearly Complete Rewrite

Due to changes in the way Hibernate has to be configured internally, this release is a nearly complete rewrite.
It was no longer possible to just use the SchemaExport-tool to build up the configuration and support all possible configuration-approaches.
Hence, the plugin now builds up the configuration using Services and Registries, as described in the Integration Guide.

Simplified Configuration: No Drop-In-Replacement!

We also took the opportunity to simplify the configuration.
Previously, the plugin had just used the configuration that was set up in the class SchemaExport.
This relieved us from the burden of understanding the configuration-internals, but brought along some oddities of the internal implementation of that tool.
It also turned out to be a bad decision in the long run, because some configuration-options are hard-coded in that class and cannot be changed.

By building up the whole configuration by hand, it is now possible to implement separate goals for creating and dropping the schema.
It also enables us to add a goal update in one of the next releases.
Because of all these improvements, you have to revise your configuration if you want to switch from 1.x to 2.x.

Be warned: this release is no drop-in replacement of the previous releases!

Not Only For 4, But For Any Version

While rewriting the plugin, we focused on Hibernate 5, which was not supported by the older releases because of some of the oddities of the internal implementation of the SchemaExport-tool.
But we tried to maintain backward compatibility.

You should be able to use the new plugin with Hibernate 5 and also with older versions of Hibernate (we only tested this with Hibernate 4).
Because of that, we dropped the 4 from the name of the plugin!
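To switch, adapt the plugin-coordinates in your pom.xml; a sketch (assuming the usual juplo group-id, and remember that the configuration-parameters have to be revised, too):

<plugin>
  <groupId>de.juplo</groupId>
  <artifactId>hibernate-maven-plugin</artifactId>
  <version>2.0.0</version>
</plugin>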

Extended Support For JPA-Configurations

We tried to support all possible configuration-approaches that Hibernate 5 understands, including hard-coded XML-mapping-files in META-INF/persistence.xml, which do not seem to be used very often, but which we needed in one of our own projects.

Therefore, the plugin now understands all (or most of?) the relevant configuration-options that one can specify through a standard JPA-configuration.
The plugin should now work with any configuration that you drop in from your existing JPA- or Hibernate-projects.
All recognized configuration from the different possible configuration-sources is merged together, considering the configuration-method-precedence described in the documentation.

We hope we did not make any inconvenient assumptions while designing the merge-process.
Please let us know if something goes wrong in your projects and you think it is because we messed it up!

Release notes:

commit 64b7446c958efc15daf520c1ca929c6b8d3b8af5
Author: Kai Moritz 
Date:   Tue Mar 8 00:25:50 2016 +0100

    javadoc hat to be configured multiple times for release:prepare

commit 1730d92a6da63bdcc81f7a1c9020e73cdc0adc13
Author: Kai Moritz 
Date:   Tue Mar 8 00:13:10 2016 +0100

    Added the special javadoc-tags for maven-plugins to the configuration

commit 0611db682bc69b80d8567bf9316668a1b6161725
Author: Kai Moritz 
Date:   Mon Mar 7 16:01:59 2016 +0100

    Updated documentation

commit a275df25c52fdb7b5b4275fcf9a359194f7b9116
Author: Kai Moritz 
Date:   Mon Mar 7 17:56:16 2016 +0100

    Fixed missing menu on generated site: moved template from skin to project

commit e8263ad80b1651b812618c964fb02f7e5ddf3d7e
Author: Kai Moritz 
Date:   Mon Mar 7 14:44:53 2016 +0100

    Turned of doclint, that was introduced in Java 8
    
    See: http://blog.joda.org/2014/02/turning-off-doclint-in-jdk-8-javadoc.html

commit 62ec2b1b98d5ce144f1ac41815b94293a52e91e6
Author: Kai Moritz 
Date:   Tue Dec 22 19:56:41 2015 +0100

    Fixed ConcurrentModificationException

commit 9d6e06c972ddda45bf0cd2e6a5e11d8fa319c290
Author: Kai Moritz 
Date:   Mon Dec 21 17:01:42 2015 +0100

    Fixed bug regarding the skipping of unmodified builds
    
    If a property or class was removed, its value or md5sum stayed in the set
    of md5sums, so that each following build (without a clean) was juged as
    modified.

commit dc652540d007799fb23fc11d06186aa5325058db
Author: Kai Moritz 
Date:   Sun Dec 20 21:06:37 2015 +0100

    All packages up to the root are checked for annotations

commit 851ced4e14fefba16b690155b698e7a39670e196
Author: Kai Moritz 
Date:   Sun Dec 20 13:32:48 2015 +0100

    Fixed bug: the execution is no more skipped after a failed build
    
    After a failed build, further executions of the plugin were skipped, because
    the MD5-summs suggested, that nothing is to do because nothing has changed.
    Because of that, the MD5-summs are now removed in case of a failure.

commit 08649780d2cd70f2861298d683aa6b1945d43cda
Author: Kai Moritz 
Date:   Sat Dec 19 18:02:02 2015 +0100

    Mappings from JPA-mapping-files are considered

commit bb8b638714db7fc02acdc1a9032cc43210fe5c0e
Author: Kai Moritz 
Date:   Sat Dec 19 03:46:49 2015 +0100

    Fixed minor misconfiguration in integration-test dependency test
    
    Error because of multiple persistence-units by repeated execution

commit 3a7590b8862c3be691b05110f423865f6674f6f6
Author: Kai Moritz 
Date:   Thu Dec 17 03:10:33 2015 +0100

    Considering mapping-configuration from persistence.xml and hibernate.cfg.xml

commit 23668ccaa93bfbc583c1697214bae116bd9f4ef6
Author: Kai Moritz 
Date:   Thu Dec 17 02:53:38 2015 +0100

    Sidestepped bug in Hibernate 5

commit 8e5921c9e76b4540f1d4b75e05e338001145ff6d
Author: Kai Moritz 
Date:   Wed Dec 16 22:09:00 2015 +0100

    Introduced the goal "drop"
    
     * Fixed integration-test hibernate4-maven-plugin-envers-sample by adapting
       it to the new drop-goal
     * Adapted the other integration-tests to the new naming schema for the
       create-script

commit 6dff3bfb0f9ea7a1d0cc56398aaad29e31a17b91
Author: Kai Moritz 
Date:   Wed Dec 16 18:08:56 2015 +0100

    Reworked configuration and the tracking thereof
    
     * Moved common parameters from CreateMojo to AbstractSchemaMojo
     * Reordered parameters into sensible groups
     * Renamed the maven-property-names of the parameters
     * All configuration-parameters are tracked, not only hibernate-parameters
     * Introduced special treatment for some of the plugin-parameters (export
       and show)

commit b316a5b4122c3490047b68e1e4a6df205645aad5
Author: Kai Moritz 
Date:   Wed Oct 21 11:49:56 2015 +0200

    Reworked plugin-configuration: worshipped the DRY-principle

commit 4940080670944a15916c68fb294e18a6bfef12d5
Author: Kai Moritz 
Date:   Fri Oct 16 12:16:30 2015 +0200

    Refined reimplementation of the plugin for Hibernate 5.x
    
    Renamed the plugin from hibernate4-maven-plugin to hibernate-maven-plugin,
    because the goal is, to support all recent older versions with the new
    plugin.

commit fdda82a6f76deefd10f83da89d7e82054e3c3ecd
Author: Kai Moritz 
Date:   Wed Oct 21 12:18:29 2015 +0200

    Integration-Tests are skiped, if "maven.test.skip" is set to true

commit b971570e28cbdc3b27eca15a7395586bee787446
Author: Kai Moritz 
Date:   Tue Sep 8 13:55:43 2015 +0200

    Updated version of juplo-skin for generation of documentation

commit 3541cf3742dd066b94365d351a3ca39a35e3d3c8
Author: Kai Moritz 
Date:   Tue May 19 21:41:50 2015 +0200

    Added new configuration sources in documentation about precedence


Funded by the European Union

This article was published in the course of a
research-project
that is funded by the European Union and the federal state North Rhine-Westphalia.


European Union: Investing in our future - European Regional Development Fund
EFRE.NRW 2014-2020: Investments in Growth and Employment

Release Of A Maven-Plugin to Maven Central Fails With “error: unknown tag: goal”

error: unknown tag: goal

Releasing a maven-plugin via Maven Central does not work if you have switched to Java 8.
This happens because, hidden in the oss-parent that you have to configure as the parent of your project to be able to release it via Sonatype, the maven-javadoc-plugin is configured for you.
And the version of javadoc that is shipped with Java 8 by default checks the syntax of the comments and fails if anything unexpected is seen.


Unfortunately, the special javadoc-tags like @goal or @phase, which are needed to configure the maven-plugin, are unexpected for javadoc.

Solution 1: Turn Off The Linting Again

As described elsewhere, you can easily turn off the linting in the plugins-section of your pom.xml:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <version>2.7</version>
  <configuration>
    <additionalparam>-Xdoclint:none</additionalparam>
  </configuration>
</plugin>

Solution 2: Tell javadoc About The Unknown Tags

Another, not so well-known approach, which I found in a fix for an issue of some project, is to add the unknown tags to the configuration of the maven-javadoc-plugin:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <version>2.7</version>
  <configuration>
    <tags>
      <tag>
        <name>goal</name>
        <placement>a</placement>
        <head>Goal:</head>
      </tag>
      <tag>
        <name>phase</name>
        <placement>a</placement>
        <head>Phase:</head>
      </tag>
      <tag>
        <name>threadSafe</name>
        <placement>a</placement>
        <head>Thread Safe:</head>
      </tag>
      <tag>
        <name>requiresDependencyResolution</name>
        <placement>a</placement>
        <head>Requires Dependency Resolution:</head>
      </tag>
      <tag>
        <name>requiresProject</name>
        <placement>a</placement>
        <head>Requires Project:</head>
      </tag>
    </tags>
  </configuration>
</plugin>

Funded by the European Union

This article was published in the course of a
research-project
that is funded by the European Union and the federal state North Rhine-Westphalia.


European Union: Investing in our future - European Regional Development Fund
EFRE.NRW 2014-2020: Investments in Growth and Employment

Develop a Facebook-App with Spring-Social – Part VII: What is Going On On The Wire

In this series of mini-how-tos, I will describe how to develop a Facebook-app with the help of Spring-Social.

In the last part of this series, I showed you how you can sign in your users through the signed_request that is sent to your canvas-page.

In this part, I will show you how to turn on logging of the HTTP-requests that your app sends to, and the responses it receives from, the Facebook Graph-API.

The Source is With You

You can find the source-code on http://juplo.de/git/examples/facebook-app/
and browse it via gitweb.
Check out part-07 to get the source for this part of the series.

Why You Want To Listen On The Wire

If you are developing your app, you will often wonder why something does not work as expected.
In this case, it is often very useful to be able to debug the communication between your app and the Graph-API.
But since all requests to the Graph-API are secured by SSL, you cannot simply listen in with tcpdump or Wireshark.

Fortunately, you can turn on the debug-logging of the underlying classes that process these requests, to sidestep this problem.

Introducing HttpClient

In its default-configuration, the Spring Framework uses HttpURLConnection, which comes with the JDK, as its HTTP-client.
As described in the documentation, some advanced methods are not available when using HttpURLConnection.
Besides, HttpClient, which is part of Apache's HttpComponents, is a much more mature, powerful and configurable alternative.
For example, you can easily plug in connection-pooling, to speed up the connection-handling, or caching, to reduce the number of requests that go over the wire.
In production, you should always use this implementation instead of the default one that comes with the JDK.

Hence, we will switch our configuration to the HttpClient from Apache, before turning on the debug-logging.

Switching From The JDK-Default To Apache's HttpClient

To switch from the default client that comes with the JDK to Apache's HttpClient, you have to configure an instance of HttpComponentsClientHttpRequestFactory as the HttpRequestFactory in your SocialConfig:

@Bean
public HttpComponentsClientHttpRequestFactory requestFactory(Environment env)
{
  HttpComponentsClientHttpRequestFactory factory =
      new HttpComponentsClientHttpRequestFactory();
  factory.setConnectTimeout(
      Integer.parseInt(env.getProperty("httpclient.timeout.connection"))
      );
  factory.setReadTimeout(
      Integer.parseInt(env.getProperty("httpclient.timeout.read"))
      );
  return factory;
}

To use this configuration, you also have to add the dependency org.apache.httpcomponents:httpclient to your pom.xml.

As you can see, this would also be the right place to enable other specialized configuration-options.
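The factory reads its two timeout-settings from the environment. Exemplary entries for application.properties (the property-names come from the code above; the values are just a guess):

httpclient.timeout.connection=2000
httpclient.timeout.read=10000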

Logging The Headers From HTTP-Requests And Responses

I configured a short-cut to enable the logging of the HTTP-headers of the communication between the app and the Graph-API.
Simply run the app with the additional switch -Dhttpclient.logging.level=DEBUG.
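Under the hood, such a short-cut can simply map the switch onto HttpClient's header-logger in application.properties. A sketch of how this could look (the exact mapping in the example-project may differ):

logging.level.org.apache.http.headers=${httpclient.logging.level:INFO}
logging.level.org.apache.http.wire=ERROR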

Take Full Control

If the headers are not enough to answer your questions, you can enable a lot more debugging messages.
You just have to overwrite the default logging-levels.
Read the original documentation of HttpClient for more details.

For example, to enable logging of the headers and the content of all requests, you have to start your app like this:

mvn spring-boot:run \
    -Dfacebook.app.id=YOUR_ID \
    -Dfacebook.app.secret=YOUR_SECRET \
    -Dfacebook.app.canvas=https://apps.facebook.com/YOUR_NAMESPACE \
    -Dlogging.level.org.apache.http=DEBUG \
    -Dlogging.level.org.apache.http.wire=DEBUG

The second switch is necessary because I defined the default-level ERROR for that logger in our src/main/resources/application.properties, to enable the short-cut for logging only the headers.

Funded by the European Union

This article was published in the course of a
research-project
that is funded by the European Union and the federal state North Rhine-Westphalia.


European Union: Investing in our future - European Regional Development Fund
EFRE.NRW 2014-2020: Investments in Growth and Employment

Develop a Facebook-App with Spring-Social – Part VI: Sign In Users Through The Canvas-Page

In this series of mini-how-tos, I will describe how to develop a Facebook-app with the help of Spring-Social.

In the last part of this series, we refactored our authentication-concept, so that it can be replaced by Spring Security more easily later on.

In this part, we will turn our app into a real Facebook-app that is rendered inside of Facebook and signs in users through the signed_request.

The Source is With You

You can find the source-code on http://juplo.de/git/examples/facebook-app/
and browse it via gitweb.
Check out part-06 to get the source for this part of the series.

What The *#&! Is a signed_request

If you add the platform Facebook Canvas to your app, you can present your app inside of Facebook.
It will then be accessible on a URL like https://apps.facebook.com/YOUR_NAMESPACE, and if a (known!) user accesses this URL, Facebook will send a signed_request that already contains some data about this user and an authorization to retrieve more.

Sign In Users With signed_request In 5 Simple Steps

When I first tried to extend the simple example this article-series is based on, I stumbled across multiple misunderstandings.
But now that I have guided you around all these obstacles, it is fairly easy to refine our app so that it can sign in users through the signed_request that is sent to a canvas-page.

You just have to:

  1. Add the platform “Facebook Canvas” in the settings of your app and choose a canvas-URL.
  2. Reconfigure your app to support HTTPS, because Facebook requires the canvas-URL to be secured by SSL.
  3. Configure the CanvasSignInController.
  4. Allow the URL of the canvas-page to be accessed unauthenticated.
  5. Enable sign-up through your canvas-page.

That is all there is to do.
But now, step by step…

Step 1: Turn Your App Into A Canvas-Page

Go to the settings-panel of your app on https://developers.facebook.com/apps and click on Add Platform.
Choose Facebook Canvas.
Pick a secure URL where your app will serve the canvas-page.

For example: https://localhost:8443.

Be aware that the URL has to be publicly available if you want to enable other users to access your app.
But that also goes for the Website-URL http://localhost:8080 that we are already using.

Just remember: if other people should be able to access your app later, you have to change these URLs to something they can access, because all the content of your app is served by you, not by Facebook.
A Canvas-app just embeds your content in an iframe inside of Facebook.

Step 2: Reconfigure Your App To Support HTTPS

Add the following lines to your src/main/resources/application.properties:

server.port: 8443
server.ssl.key-store: keystore
server.ssl.key-store-password: secret

I have included a self-signed keystore with the password secret in the source that you can use for development and testing.
But of course, later you have to create your own keystore, with a certificate that is signed by an official certificate-authority known to the browsers of your users.
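If you later want to create such a keystore yourself, a one-liner like the following does the job for development (a sketch; adjust dname, validity and the passwords):

keytool \
    -keystore keystore -storepass secret -keypass secret \
    -genkey -alias tomcat -keyalg RSA -validity 365 \
    -dname "CN=localhost"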

Since your app now listens on 8443 and uses HTTPS, you have to change the URL that is used for the platform “Website”, if you want your sign-in-page to continue to work in parallel to the sign-in through the canvas-page.

For now, you can simply change it to https://localhost:8443/ in the settings-panel of your app.

Step 3: Configure the CanvasSignInController

To actually enable the automatic handling of the signed_request (that is, decoding the signed_request and signing in the user with the data it provides), you just have to add the CanvasSignInController as a bean in your SocialConfig:

@Bean
public CanvasSignInController canvasSignInController(
    ConnectionFactoryLocator connectionFactoryLocator,
    UsersConnectionRepository usersConnectionRepository,
    Environment env
    )
{
  return
      new CanvasSignInController(
          connectionFactoryLocator,
          usersConnectionRepository,
          new UserCookieSignInAdapter(),
          env.getProperty("facebook.app.id"),
          env.getProperty("facebook.app.secret"),
          env.getProperty("facebook.app.canvas")
          );
}

Step 4: Allow the URL Of Your Canvas-Page To Be Accessed Unauthenticated

Since we have “secured” all of our pages except our sign-in-page /signin*, so that they can only be accessed by an authenticated user, we have to explicitly allow unauthenticated access to our new special sign-in-page.

To achieve that, we have to refine our UserCookieInterceptor as follows.
First, add a pattern for all pages that may be accessed unauthenticated:

private final static Pattern PATTERN = Pattern.compile("^/signin|canvas");

Then match the requests against this pattern instead of the fixed string /signin:

if (PATTERN.matcher(request.getServletPath()).find())
  return true;
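Putting both fragments together, the relevant part of the interceptor might look like the following sketch (the cookie-name and the redirect-target are assumptions; use whatever the earlier parts of this series established):

import java.util.regex.Pattern;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;
import org.springframework.web.util.WebUtils;

public class UserCookieInterceptor extends HandlerInterceptorAdapter
{
  private final static Pattern PATTERN = Pattern.compile("^/signin|canvas");

  @Override
  public boolean preHandle(
      HttpServletRequest request,
      HttpServletResponse response,
      Object handler
      )
      throws Exception
  {
    // Pages matching the pattern may be accessed unauthenticated
    if (PATTERN.matcher(request.getServletPath()).find())
      return true;

    // The cookie-name "user" is an assumption: use the name that was
    // introduced for the user-cookie in the earlier parts of this series
    if (WebUtils.getCookie(request, "user") != null)
      return true;

    // Everyone else is sent to the sign-in-page
    response.sendRedirect("/signin");
    return false;
  }
}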

Step 5: Enable Sign-Up Through Your Canvas-Page

Facebook always sends a signed_request to your app if a user visits it through the canvas-page.
But on the first visit of a user, the signed_request does not authenticate the user.
In this case, the only data that is presented to your page is the language and locale of the user and his or her age.

Because the data that is needed to sign in the user is missing, the CanvasSignInController will issue an explicit authentication-request to the Graph-API via a so-called Server-Side Log-In.
This process includes a redirect to the Login-Dialog of Facebook and then a second redirect back to your app.
It requires the specification of a full absolute URL to redirect back to.

Since we are configuring the canvas-sign-in, we want new users to be redirected to the canvas-page of our app.
Hence, you should use the Facebook-URL of your app: https://apps.facebook.com/YOUR_NAMESPACE.
This will result in a call to your canvas-page with a signed_request that authenticates the new user, if the user agrees to share the requested data with your app.

Any other page of your app would work as well, but the result would be a call to the stand-alone version of your app (the version that Facebook calls the “Website”-platform of your app), meaning that your app is not rendered inside of Facebook.
It would also require one more call from your app to the Graph-API to actually sign in the new user, because Facebook sends the signed_request only to the canvas-page of your app.

To specify the URL, I have introduced a new property facebook.app.canvas, which is handed to the CanvasSignInController.
You can specify it when starting your app:

mvn spring-boot:run \
    -Dfacebook.app.id=YOUR_ID \
    -Dfacebook.app.secret=YOUR_SECRET \
    -Dfacebook.app.canvas=https://apps.facebook.com/YOUR_NAMESPACE

Be aware that this process requires the automatic sign-up of new users that we enabled in part 3 of this series.
Otherwise, the user would be redirected to the sign-up-page of your application after allowing your app to access the requested data.
Obviously, that would be very confusing for the user, so we really need automatic sign-up in this use-case!

Coming Next…

In the next part of this series, I will show you how you can debug the calls that Spring Social makes to the Graph-API, by turning on the debug-logging of the classes that process the HTTP-requests and -responses your app is making.

Funded by the European Union

This article was published in the course of a
research-project
that is funded by the European Union and the federal state North Rhine-Westphalia.


European Union: Investing in our future - European Regional Development Fund
EFRE.NRW 2014-2020: Investments in Growth and Employment