A common fear is that running several JVMs, rather than several threads within one and the same JVM, will consume a vast amount of resources. This consumption can be divided into two major parts:
1. A large number of processes is in itself fairly resource-consuming, since each process needs its own process context. This overhead is already present, however, since each connection has its own process.
2. The overhead of the interpreter itself and the classes that it loads. Mixing connections as threads in one JVM implies that you either let those connections share the same ClassLoader chain, or give each connection its own set of ClassLoaders. The first approach results in very weak isolation between the connections and makes security hard to maintain. The second approach (sketched below) is almost as resource-consuming as running the threads in separate JVMs.
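To illustrate the second approach, here is a minimal sketch of giving each connection its own ClassLoader (the class names and directory layout are hypothetical). Every class a connection uses is loaded again for that connection, which is exactly where the per-connection cost comes from:

    import java.net.URL;
    import java.net.URLClassLoader;

    public class PerConnectionLoaders {
        // Hypothetical: each connection loads its classes through its own
        // URLClassLoader, so no static state is shared between connections.
        // The price is that every class is loaded once per connection.
        static ClassLoader loaderFor(String connectionId) throws Exception {
            URL classPath = new URL("file:/some/class/dir/" + connectionId + "/");
            return new URLClassLoader(new URL[] { classPath });
        }
    }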
In the Java community you are very likely to use a connection pool. The pool ensures that the number of connections stays as low as possible and that connections are reused rather than closed and reestablished, so new JVMs are started only rarely.
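A minimal sketch of such a pool follows (a real deployment would use an established pooling library; the JDBC URL is a placeholder):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class SimplePool {
        private final Deque<Connection> idle = new ArrayDeque<Connection>();
        private final String url;

        public SimplePool(String url) {
            this.url = url;
        }

        // Reuse an idle connection if one exists; open a new one otherwise.
        public synchronized Connection acquire() throws SQLException {
            Connection c = idle.poll();
            return (c != null) ? c : DriverManager.getConnection(url);
        }

        // Return the connection to the pool instead of closing it.
        public synchronized void release(Connection c) {
            idle.push(c);
        }
    }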
Separate JVMs give you a much higher degree of isolation. There's no problem attaching a debugger to one connection (one JVM) while the others run unaffected. There's no chance that one connection accidentally transfers dirty data to another. The JVMs can be brought down and restarted individually, and security policies are much easier to enforce.
Remote procedure calls are extremely expensive compared to in-process calls using JNI. For an update trigger to function, you can choose one of two approaches: either you limit the number of RPC calls by sending two full Tuples (old and new) and a Tuple Descriptor to the remote JVM and then passing a third Tuple (the modified new one) back to the originating process, or you pass those structures by reference (as CORBA remote objects) and perform one RPC call each time you access them. Both approaches are sketched below.
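In rough outline, the two styles could look like this (all type names here are invented for illustration):

    import java.io.Serializable;

    // Placeholder value types; in reality these would carry the column
    // values and the column metadata respectively.
    class TupleData implements Serializable { }
    class TupleDescData implements Serializable { }

    // Approach 1: one RPC per trigger invocation. The old and new tuples
    // and the descriptor are serialized across; the modified new tuple
    // is serialized back.
    interface ByValueTrigger {
        TupleData onUpdate(TupleData oldTuple, TupleData newTuple,
                           TupleDescData descriptor);
    }

    // Approach 2: tuples stay in the backend and are accessed through
    // remote references; every getter or setter is a separate RPC.
    interface RemoteTupleRef {
        Object getValue(int columnIndex);
        void setValue(int columnIndex, Object value);
    }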
Another example is using one or several Java functions in the projection of a SELECT statement over a Relation with several thousand rows. Each row will cause at least one call to the remote JVM. One of the reasons for using Java in the backend in the first place is to keep the number of RPC calls as low as possible.
Using JNI to directly access structures like TriggerData, Relation, TupleDesc, and HeapTuple minimizes the amount of data that needs to be copied. Parameters and return values that are primitives need not even become Java objects: a 32-bit int4 Datum can be passed directly as a Java int (jint in JNI).
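For example, a backend function like the following (the name is hypothetical) can be mapped so that the int4 Datum travels straight into the jint parameter and straight back out, with no Java object created and nothing copied in between:

    public class PrimitiveMapping {
        // The C glue (not shown) receives a 32-bit int4 Datum from the
        // backend and passes it directly as the jint parameter; the jint
        // return value becomes the result Datum the same way.
        public static int addOne(int value) {
            return value + 1;
        }
    }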
In order to maintain correct visibility, you must either propagate the transaction to the remote process and be able to reestablish a JDBC connection attached to that transaction, or establish some kind of JDBC connection that calls back to the invoking process. Both choices result in an increased number of RPC calls.
My approach is to use the underlying SPI interfaces directly through JNI. I will provide a "pseudo connection" that implements the JDBC Connection interface. From that, you will be able to work with the database using standard JDBC, within your current transaction.
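Server-side code would then look like ordinary JDBC. A minimal sketch, assuming the conventional server-side JDBC URL "jdbc:default:connection" (as in SQLJ) and a hypothetical users table:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class InBackend {
        public static String userName(int id) throws SQLException {
            // The "pseudo connection": backed by SPI through JNI and
            // running inside the invoking session's current transaction.
            Connection conn = DriverManager.getConnection("jdbc:default:connection");
            PreparedStatement ps = conn.prepareStatement(
                "SELECT name FROM users WHERE id = ?");
            try {
                ps.setInt(1, id);
                ResultSet rs = ps.executeQuery();
                return rs.next() ? rs.getString(1) : null;
            } finally {
                ps.close();
            }
        }
    }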
I have some experience working with CORBA and other RPC mechanisms; they add a fair amount of complexity to the process.