
coherence-incubator's Introduction

This Repository Is No Longer Maintained and is Deprecated

The majority of functionality implemented by the various incubator modules is now available in the core Coherence product.

Coherence Incubator Source Repository

The Oracle Coherence Incubator Project defines a collection of examples, organized as Apache Maven modules, demonstrating advanced uses of Oracle Coherence.

Repository Structure

This repository contains many branches of development for the Oracle Coherence Incubator, each based on different major revisions of Oracle Coherence.

Coherence Incubator 13 (for Coherence 12.2.x)

Release Documentation: http://coherence-community.github.com/coherence-incubator/13.0.1/

Development Branch: develop-13

Coherence Incubator 12 (for Coherence 12.1.x)

Release Documentation: http://coherence-community.github.com/coherence-incubator/12.6.1/

Development Branch: develop-12

Coherence Incubator 11 (for Coherence 3.7.1.x)

Release Documentation: http://coherence-community.github.com/coherence-incubator/11.3.3/

Development Branch: develop-11

coherence-incubator's People

Contributors

brianoliver, lsho, thegridman


coherence-incubator's Issues

Upgrade to use Oracle Tools 1.0.0

We need to upgrade to use Oracle Tools 1.0.0 as this provides several fixes and introduces support for improved Coherence-based JUnit Test Cluster Isolation.

Introduce a FailurePolicy (callback handler) to control what to do on failure.

This should be done in the event-distributor.

Ideally what we'd like to see is some kind of FailurePolicy (callback) that provides:

a). the number of consecutive failures (thus far);
b). the amount of time over which those consecutive failures have occurred;
c). the total number of failures;
d). the amount of time over which all failures have occurred;
e). the number of events currently queued;
f). the exception that caused the failure;
g). a choice of what to do next: "continue" distributing events, "suspend" distribution of events, or "stop" distribution of events.

Using this, developers can control and intercept what should happen when failures occur.

The default FailurePolicy would be implemented much as we do now: suspend after a number of consecutive failures.
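The callback described above could be sketched as follows. This is a hypothetical illustration, not the Incubator API: the names FailureAction, FailurePolicy, and DefaultFailurePolicy, and the parameter list, are all assumptions drawn from points a) through g).

```java
// Hypothetical sketch of the proposed FailurePolicy callback; names and
// signature are illustrative, not part of the Incubator API.
enum FailureAction { CONTINUE, SUSPEND, STOP }

interface FailurePolicy {
    FailureAction onFailure(int consecutiveFailures,       // a)
                            long consecutiveFailureMillis, // b)
                            int totalFailures,             // c)
                            long totalFailureMillis,       // d)
                            int queuedEvents,              // e)
                            Exception cause);              // f)
}

// the default mirrors today's behaviour: suspend after N consecutive failures
class DefaultFailurePolicy implements FailurePolicy {
    private final int maxConsecutiveFailures;

    DefaultFailurePolicy(int maxConsecutiveFailures) {
        this.maxConsecutiveFailures = maxConsecutiveFailures;
    }

    @Override
    public FailureAction onFailure(int consecutiveFailures, long consecutiveFailureMillis,
                                   int totalFailures, long totalFailureMillis,
                                   int queuedEvents, Exception cause) {
        return consecutiveFailures >= maxConsecutiveFailures
               ? FailureAction.SUSPEND
               : FailureAction.CONTINUE;
    }
}

FailurePolicy policy = new DefaultFailurePolicy(3);
FailureAction action = policy.onFailure(3, 1500L, 10, 60000L, 42,
                                        new RuntimeException("distribution failed"));
```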

JndiNamespaceContentHandlerTest fails if machine has no internet access.

As reported by Richard Carless:

@Test
public void testSimpleJNDILookup() throws NamingException
{
    System.setProperty(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");

    Context ctx = new InitialContext();

//  Assert.assertNotNull(ctx.lookup("dns:///www.oracle.com"));   // <--- FAILS without internet access
}

Instead we should change this to use a local address or the local file system.

Also, com.sun.jndi.dns.DnsContextFactory is an internal class; avoiding it would also make the generated compiler warnings go away.

Add TransactionEvent support for LiveObjects

Given that we have support for Partition-level Transactional Events, we should add support for this when using LiveObjects.

ie: We should provide the ability for a LiveObject to handle the transaction in which it is committing and committed.

@OnCommitting
public void onCommitting(Set<BinaryEntry> entries);

@OnCommitted
public void onCommitted(Set<BinaryEntry> entries);

Annotated LiveObject methods should only receive a single Entry (not a set)

Currently the LiveObject annotated methods receive the underlying LiveEvent, which includes all of the entries. This is fine for simple events, but for events that contain multiple entries, each LiveObject receives every entry. This is undesirable, as it is next to impossible to correlate a LiveObject with its entry.

Instead we should change the signature of the annotated methods to be:

public void onEvent(BinaryEntry entry);

And then change the LiveObjectEventInterceptor to pass only the appropriate BinaryEntry to each LiveObject.
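The routing the interceptor would have to do can be illustrated with a standard-library analogy (not Incubator code): each entry is dispatched only to the handler it belongs to, rather than handing every handler the whole entry set. Strings stand in for LiveObjects and BinaryEntries here.

```java
// Stdlib analogy of per-entry dispatch: route each entry only to its own
// handler, instead of delivering the full event to every handler.
import java.util.*;
import java.util.function.Consumer;

Map<String, List<String>> received = new HashMap<>();
Map<String, Consumer<String>> liveObjects = new HashMap<>();
for (String id : List.of("A", "B")) {
    // each "live object" records only the entries dispatched to it
    liveObjects.put(id, entry -> received.computeIfAbsent(id, k -> new ArrayList<>()).add(entry));
}

// an "event" carrying one entry per live object
Map<String, String> entries = Map.of("A", "entry-for-A", "B", "entry-for-B");
entries.forEach((id, entry) -> liveObjects.get(id).accept(entry));
```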

Ensure CacheFactory.ensureCluster() calls are only made when starting a pattern

We need to make sure that #48 does not occur in other patterns. From what I can tell, it looks like the Processing Pattern is the only other place where this possibly happens.


Calling CacheFactory.ensureCluster() on CommandExecutor.stop() hangs

To redeploy a cluster member correctly you need to detach the member from the cluster by programmatically calling CacheFactory.shutdown() when the application is undeployed. CacheFactory.shutdown() then calls the stop() methods of the distributed services running on the leaving member.

Because the stop() method is run by a service thread, no reentrant service calls should be invoked inside the stop method to avoid deadlocks.

THE PROBLEM
CommandExecutor.stop() makes a CacheFactory.ensureCluster() call, which is a service call within a service call (thus, a reentrant call):

public void stop() {
    if (Logger.isEnabled(Logger.DEBUG)) Logger.log(Logger.DEBUG, "Stopping CommandExecutor for %s", contextIdentifier);

    // stop immediately
    setState(State.Stopped);

    // this CommandExecutor must not be available any further to other threads
    CommandExecutorManager.removeCommandExecutor(this.getContextIdentifier());

    // unregister the JMX MBean for the CommandExecutor
    Registry registry = CacheFactory.ensureCluster().getManagement(); // THIS IS THE SERVICE CALL
    if (registry != null) {
        if (Logger.isEnabled(Logger.DEBUG)) Logger.log(Logger.DEBUG, "Unregistering JMX management extensions for CommandExecutor %s", contextIdentifier);
        registry.unregister(getMBeanName());
    }

    if (Logger.isEnabled(Logger.DEBUG)) Logger.log(Logger.DEBUG, "Stopped CommandExecutor for %s", contextIdentifier);
}

If the distributed service used to support the command pattern is configured with a single thread (as it is by default), this call produces a deadlock, with a thread dump like this:

Thread[DistributedCache:DistributedCacheForCommandPattern|SERVICE_STOPPING,5,Cluster]
    com.tangosol.net.CacheFactory.ensureCluster(CacheFactory.java:424)
    com.oracle.coherence.patterns.command.internal.CommandExecutor.stop(CommandExecutor.java:671)
        ...

DIAGNOSTIC (AND POTENTIAL SOLUTION)
I've changed the CommandExecutor.stop() method to use a non-blocking call to obtain the Cluster:

Registry registry = CacheFactory.getCluster() != null ? CacheFactory.getCluster().getManagement() : null;

Because CacheFactory.getCluster() is not a blocking service call, the deadlock is avoided.
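The reentrant-call hang can be reproduced with a standard-library analogy (not Coherence code), where a single-threaded executor stands in for the single service thread: a task that blocks on work submitted to its own executor can never complete, because the only thread is busy waiting. A timeout makes the hang observable instead of blocking forever.

```java
// Stdlib analogy of the reentrant service-call deadlock.
import java.util.concurrent.*;

ExecutorService service = Executors.newSingleThreadExecutor();
Future<String> outer = service.submit(() -> {
    // reentrant "service call": submitted to the same single-threaded executor
    Future<String> inner = service.submit(() -> "inner");
    try {
        return inner.get(200, TimeUnit.MILLISECONDS); // would block forever without the timeout
    } catch (TimeoutException e) {
        return "deadlock";
    }
});
String outcome = outer.get();
service.shutdownNow();
```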

Ensure unit and functional tests isolate Coherence clusters

On some platforms (perhaps Oracle Enterprise Linux?), virtualized network infrastructure doesn't correctly respect Java network settings and thus doesn't correctly isolate Coherence clusters.

This may cause unexpected Coherence clustering to occur which may lead to test failures.

We need to make sure that we isolate every test (perhaps turn off clustering) to ensure we can build on those platforms.

For the most part this is not a problem on Mac OS X, Ubuntu, or Windows (XP or Vista) with Java 6, 7, or 8.

StaticFactoryClassSchemeBasedParameterizedBuilder.writeExternal(..) method is incorrect.

From the Oracle Forum: https://forums.oracle.com/forums/thread.jspa?threadID=2486558&tstart=0

I am trying to use static class factories and I found that StaticFactoryClassSchemeBasedParameterizedBuilder.writeExternal(...) is incorrect.

/**
 * {@inheritDoc}
 */
@Override
public void writeExternal(PofWriter writer) throws IOException
{
    writer.writeObject(1, factoryClassName);
    writer.writeObject(2, factoryMethodName);
    writer.writeObject(2, parameters);
}

when it should be writer.writeObject(3, parameters) (in readExternal the parameters are read at index 3).
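The consequence of the duplicate index can be illustrated with a standard-library stand-in (a map keyed by index, not the real POF API): writing two values at index 2 overwrites the first, and a reader expecting parameters at index 3 finds nothing there.

```java
// Stdlib stand-in illustrating the duplicate-index bug and its fix.
import java.util.*;

Map<Integer, Object> written = new LinkedHashMap<>();
written.put(1, "factoryClassName");
written.put(2, "factoryMethodName");
written.put(2, List.of("param"));            // bug: index 2 reused, method name overwritten

boolean methodNameLost = !written.containsValue("factoryMethodName");
boolean parametersMissingAtIndex3 = !written.containsKey(3);

written.put(2, "factoryMethodName");
written.put(3, List.of("param"));            // fix: parameters at index 3, matching readExternal
```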

Migrate Messaging Pattern Functional Tests from Internal Oracle Framework

We have a large number of tests that we should migrate over to use JUnit instead of the internal Oracle Framework.

It would be nice to have these as part of our automated builds instead of running them manually. Additionally it would be nice for the public to have access to them (because they can't access TestLogics and the testing infrastructure we have)

AbstractPushReplicationTest contains hardcoded developer path (narliss)

While running the Push Replication tests I noticed the following:

"The parent directory of the specified log file "/Users/narliss/dev/git/coherence-incubator/coherence-pushreplicationpattern/testActiveActiveCR-NY.log" does not exist; using System.out for log output instead."

Looking at the AbstractPushReplicationTest I found that we have a system property setting the log to this file. This needs to be removed.

Introduce support for customized configuration merging for foreign namespaces

After doing some testing I discovered a "todo" in the XmlPreprocessingNamespaceHandler.mergeCacheConfig() method describing how it should handle (or rather, does not handle) non-Coherence XML namespaces, i.e. anything that is not a cache mapping or cache scheme.

Here's my suggestion: the best place to handle this is probably to defer to the foreign (i.e. other-namespace) NamespaceHandlers themselves. We could do this by introducing an optional interface for them to implement that allows them to perform customized "merging".
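The opt-in contract suggested above could look something like the following. This is purely a sketch: the interface name MergeAwareNamespaceHandler and its signature are illustrative assumptions, not part of the Incubator, and strings stand in for XML elements.

```java
// Hypothetical sketch of an optional interface foreign NamespaceHandlers
// could implement to take part in configuration merging.
interface MergeAwareNamespaceHandler {
    /** Merge an element from an incoming configuration into the base configuration. */
    String mergeElement(String baseElement, String incomingElement);
}

// a trivial handler whose policy is "the incoming element wins"
MergeAwareNamespaceHandler lastWins = (base, incoming) -> incoming;
String merged = lastWins.mergeElement("<foo>old</foo>", "<foo>new</foo>");
```

A handler that doesn't implement the interface would keep today's behaviour; one that does gets full control over how its namespace's elements are combined.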

LifecycleAwareEvent requires onFailure method

During a code review we discovered that LifecycleAwareEvents don't provide a callback for when an Event causes an error/exception within a NonBlockingFiniteStateMachine.

There are two issues:

1). The interface requires an onFailure method.
2). The existing methods don't provide information about the State they are in.

This small (breaking) change resolves this issue.
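A rough sketch of what the breaking change might look like follows. The method names, the state parameter, and the traffic-light enum are illustrative assumptions, not the actual Incubator interface.

```java
// Hypothetical sketch of the proposed (breaking) change: an onFailure callback,
// plus state information passed to every lifecycle method.
import java.util.*;

interface LifecycleAwareEvent<S extends Enum<S>> {
    boolean onAccept(S currentState);
    void onProcessed(S enteredState);
    void onFailure(S currentState, Exception cause);  // the missing callback
}

enum Light { GREEN, RED }

List<String> calls = new ArrayList<>();
LifecycleAwareEvent<Light> event = new LifecycleAwareEvent<>() {
    public boolean onAccept(Light s) { calls.add("accept:" + s); return true; }
    public void onProcessed(Light s) { calls.add("processed:" + s); }
    public void onFailure(Light s, Exception e) { calls.add("failure:" + s); }
};
event.onFailure(Light.RED, new IllegalStateException("transition failed"));
```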

Application Runner incorrectly uses printf when it should use println.

As reported by Rich Carless:

I've been trying to run the samples from the incubator, however there is a bug in the application launcher code.

The problem is in Runner.java, the following line will try to print out the arguments in System.Properties

System.out.printf("Using System Properties : " + System.getProperties() + "\n");

However, because printf is used, it tries to interpret format specifiers inside the string, so if it contains %s or any other format escape the code will throw an exception. Under Windows, the following system property value causes a problem:

:\Windows%system32;

I have changed the code to use println and all is fine.
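The difference can be shown in a few lines: printf treats '%' in its format string as the start of a format specifier, so a value such as a Windows path containing "%s" blows up, while println prints it verbatim. (The path here is an illustrative example, not the exact property from the report.)

```java
// printf parses '%' in the format string; println does no format parsing.
String props = "C:\\Windows%system32;";

System.out.println("Using System Properties : " + props);   // safe

String printfResult;
try {
    System.out.printf("Using System Properties : " + props + "\n");
    printfResult = "ok";
} catch (java.util.IllegalFormatException e) {
    // "%s" in the concatenated string demands an argument that was never supplied
    printfResult = e.getClass().getSimpleName();
}
```

The fix is either to use println as reported, or to keep printf but pass the value as an argument: System.out.printf("Using System Properties : %s%n", props).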

Added support for Google Analytics tracking in the Documentation

We need to add the following script to the site.xml:

<script type="text/javascript">

 var _gaq = _gaq || [];
 _gaq.push(['_setAccount', 'UA-39051314-1']);
 _gaq.push(['_trackPageview']);

 (function() {
   var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
   ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
   var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
 })();

</script>

We should then re-release/update the 11.0.0 source code.

Enhance ConfigurableCacheFactory Tests

Jonathan Knight identified and provided some new tests for the ConfigurableCacheFactory that we should include in the Incubator, especially Incubator 12, which uses a different CCF implementation.

Push Replication / Event Distribution should not unnecessarily deserialize events

As discovered by Reon Campell, Push Replication / Event Distribution unnecessarily deserializes events during the replication process. There is no requirement for this unless a custom Transformer / Conflict Resolver is being used.

This issue makes it hard to use pure C++/.NET-based applications, as it forces developers to implement Java server-side classes.

Optionally restrict the max number of subscribers

It can sometimes be useful to restrict the number of subscribers on a Queue destination. An example would be to create multiple queues with a single subscriber each, and hash each object to one of them (which can provide a weak guarantee of in-order processing for multiple versions of the same object).
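The hashing idea above can be sketched with standard-library queues (not the Messaging Pattern API): versions of the same object share a key, so they hash to the same single-subscriber queue and retain their relative order.

```java
// Stdlib sketch: route events to one of N single-subscriber queues by key hash.
import java.util.*;

int queueCount = 4;
List<Deque<String>> queues = new ArrayList<>();
for (int i = 0; i < queueCount; i++) queues.add(new ArrayDeque<>());

// events are "key:version"; versions of the same object share a key
for (String event : List.of("order-42:v1", "order-42:v2", "order-7:v1")) {
    String key = event.split(":")[0];
    int index = Math.floorMod(key.hashCode(), queueCount);  // floorMod: safe for negative hashes
    queues.get(index).addLast(event);
}

List<String> orderQueue = new ArrayList<>(queues.get(Math.floorMod("order-42".hashCode(), queueCount)));
```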
