General

 How flexible is Exalate?

Exalate uses a Groovy-based scripting engine to implement filtering, transformation and mapping.  Even though scripting is more complex than a drag-and-drop interface, it is the only approach that makes even the most complex mappings possible to implement. See how the most complex use cases can be implemented using the relation-level processors.  Do you want to map statuses to comments? Check the following blog.
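
As an illustration of what such a script can look like, here is a minimal sketch of a transformation rule in an outgoing processor (data filter). It assumes the standard 'issue' and 'replica' objects exposed to Exalate scripts; the priority names and the customKeys property used to carry the mapped value are assumptions made for the example.

// Illustrative sketch only - map the local priority name before sending it
def priorityMapping = [
    "Blocker" : "Highest",
    "Trivial" : "Lowest"
]

replica.summary     = issue.summary
replica.description = issue.description
replica.customKeys."mappedPriority" =
    priorityMapping[issue.priority?.name] ?: issue.priority?.name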

 Is it robust and performant enough to scale and support enterprise level synchronizations?

Exalate has been in use since 2014 at different enterprises.  One of our customers processes on average 12,000 issues per month without significant impact on the overall instance.

 Can you upgrade your own environment without affecting any remote configurations?

Exalate is based on a distributed model where each application administrator configures what information can be sent and how incoming messages must be handled.  All messages exchanged between the nodes are based on a commonly defined 'hub issue', which carries all the information one application administrator wants to communicate to the other side. The processors use this information to apply the changes to the local issues.
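
As a minimal sketch of how such a processor applies an incoming hub issue (assuming the 'replica' and 'issue' objects and a comment merge helper as typically available in Exalate scripts; names may differ per version):

// Change processor on the receiving side: copy only the fields the local
// administrator wants to accept and ignore everything else.
issue.summary     = replica.summary
issue.description = replica.description
issue.labels      = replica.labels
// Merge the incoming comments into the local issue (helper name assumed)
issue.comments    = commentHelper.mergeComments(issue, replica)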

 Is the synchronization status visible for the user - even when the synchronization failed?

 Absolutely.  The synchronization panel shows all relevant details - even if the remote issue has been removed.

 Can the configuration evolve independently?

Thanks to the distributed model, it is possible to adapt the common hub issue to the local context.  For instance, if your local workflow evolves, you can change the create and change processors to take the new configuration into account.
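
A hedged sketch of such an adaptation, reusing the workflowHelper.transition call that also appears in the comment-triggered example further down; the status and transition names are hypothetical:

// After introducing a new local "In Review" status, map the remote status
// onto the evolved local workflow.
def statusMapping = [
    // remote status : [local target status, transition to get there]
    "In Progress" : ["In Review", "Start Review"],
    "Done"        : ["Resolved",  "Resolve"]
]
def mapping = statusMapping[replica.status?.name]
if (mapping && issue.status?.name != mapping[0]) {
    workflowHelper.transition(issue, mapping[1])
}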

 Are you planning to accommodate other issue trackers?

 Yes. Right now, Exalate is compatible with JIRA Server, JIRA Cloud and HP ALM.  We are working on the integration with other issue trackers.

 How complex is it to construct a complex scenario?

A scenario we encounter regularly is the service desk use case, where tickets raised by a customer need to be raised internally in different projects, depending on a set of properties. One use case was particularly interesting: the target project depended on 3 different properties, resulting in 200+ potential permutations.  This mapping is stored in a single database table.

The way we solved it was to look up the right project key in the database table and use it to raise the issue in the right project.  Check the following example showing how such a configuration can be set up using Exalate.
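
Below is a hedged sketch of that lookup idea in a create processor, using plain Groovy SQL. The table and column names, the connection details and the customKeys properties are all assumptions; the real mapping table depends on the customer's setup.

import groovy.sql.Sql

// Look up the target project key based on three incoming properties
// (all names below are hypothetical).
def db = Sql.newInstance("jdbc:postgresql://localhost:5432/mappings",
                         "exalate", "secret", "org.postgresql.Driver")
try {
    def row = db.firstRow(
        "SELECT project_key FROM project_mapping WHERE product = ? AND component = ? AND region = ?",
        [replica.customKeys."product", replica.customKeys."component", replica.customKeys."region"])
    // Raise the issue in the mapped project, falling back to a default project
    issue.projectKey = row?.project_key ?: "FALLBACK"
} finally {
    db.close()
}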

 Is it possible to synchronize contextual data of an issue (such as project, version, user information)?

Yes.  A hub issue contains all the contextual data needed to implement complex business logic.  Check out the detailed information a hub issue message transports, such as the project, versions and user information.
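
As a small, hedged example of using that contextual data in a create processor (the nodeHelper.getUserByEmail helper and the project keys are assumptions for the example):

// Use the remote project and reporter to drive local logic.
def remoteProjectKey = replica.project?.key
def localReporter    = nodeHelper.getUserByEmail(replica.reporter?.email)

issue.reporter = localReporter ?: issue.reporter
// Route support tickets to a different local project (keys are hypothetical)
issue.projectKey = (remoteProjectKey == "SUP") ? "DEV" : "GEN"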

 Is it possible to transition an issue whenever a certain comment is given?

Yes.  Thanks to the Groovy goodness, one can parse the newly added comments and trigger a resolve transition:

 

//
// Resolve the issue when a comment contains the word 'Resolve'
//
def resolveComments = replica.addedComments.findAll { comment -> comment.body.contains("Resolve")}
if (resolveComments.size() > 0) {
   workflowHelper.transition(issue,"Resolve")
}
 What type of information can be exchanged?

There are no limits on the information that can be sent using the customKeys property of a replica.   It allows you to send any arbitrary object, which gets serialized at one end and deserialized at the other.
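
A minimal sketch, with hypothetical property and custom field names; one side attaches the values in its data filter, the other side reads them back in its processor:

// Outgoing side (data filter): attach arbitrary values to the replica.
replica.customKeys."approvalState" = issue.customFields."Approval"?.value?.toString()
replica.customKeys."riskScore"     = 42

// Incoming side (change processor): read the values back.
def approval = replica.customKeys."approvalState"
if (approval == "Rejected") {
    issue.description = (issue.description ?: "") + "\nRemote approval state: ${approval}"
}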

 Are updates and deletions of comments, attachments, worklogs supported?

Yes. A hub issue represents not only standard fields but also whole sets of objects for comments, attachments and more.

For instance, it is possible to list all the added and removed comments using the corresponding properties.  Check the details of what is contained in a replica here.
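
For example, a hedged sketch that filters the added comments before they are applied; addedComments also appears in the resolve example above, while the filtering assignment and the merge helper are assumptions:

// Drop internal notes before the incoming comments are applied locally.
def isInternalNote = { comment -> comment.body?.startsWith("[internal]") }

replica.addedComments = replica.addedComments.findAll { !isInternalNote(it) }
issue.comments = commentHelper.mergeComments(issue, replica)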

 Is it possible to pause a synchronization for maintenance purposes?

Yes - check the relation and/or instance configuration - these can be activated / deactivated if necessary. 

 How does the engine ensure that attachments are sent over correctly?

When a node requests another node to send over the content of an attachment, the other node responds with a hash key calculated from the content of the attachment. The same hash key is calculated on the receiving end to ensure that the received content matches the original attachment.  The download is retried once to work around transmission errors.  If this second attempt also fails, an error is raised and the administrator is notified.
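
Conceptually, the check works like the following sketch. This is an illustration of the idea only, not Exalate's actual implementation, and the hash algorithm and names are assumptions.

import java.security.MessageDigest

// Hash the received bytes, compare with the hash announced by the sending
// node, and retry the download once on a mismatch.
def hashOf = { byte[] content ->
    MessageDigest.getInstance("SHA-256").digest(content).encodeHex().toString()
}

def verifiedDownload = { String expectedHash, Closure<byte[]> download ->
    byte[] content = download()
    if (hashOf(content) != expectedHash) {
        content = download()                          // single retry
        if (hashOf(content) != expectedHash) {
            // At this point an error is raised and the administrator notified
            throw new IllegalStateException("Attachment content does not match the announced hash")
        }
    }
    content
}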

 How will the solution behave in the case of a disaster recovery (where one side needs to restore a backup which is 1-day-old)?

There are 2 different situations to be considered:

  • Any new issue brought under sync will be synchronized normally

  • All issues already under sync will require a 'calibration phase', which allows the synchronization to be brought back into shape.

 How are the system administrators notified in case an error is raised?

The group of Exalate administrators will be notified in 2 different ways:

  • By email, with the details of the error notification

  • In JIRA, using 'In-JIRA' notifications that pop up a warning about the blocking error

Also, errors are generated at 4 different levels:

  • Issue - for example, when the proxy user is not allowed to modify the local issue due to issue security

  • Relation - for example, when there is an error in the processor scripts, or when there is a permission problem to apply changes to an issue

  • Instance - for example, a connection problem

  • Node - for example, bugs in the synchronization layer

 How is the troubleshooting process supported?

Whenever an error occurs, a detailed overview (a stack trace) of the steps leading to the problem is provided.  In case this is not sufficient, one can create a support.zip and send it to support@exalate.com for further processing.

 How long does it take to synchronize an issue, when 10 other issues under sync have been updated at the same time?

This happens immediately. Exalate is based on a fully event-driven engine.  Whenever a user changes some information (and that information needs to be sent), a sync event is generated containing a snapshot of the modified issue, ready to be sent to the remote instance.  When 10 issues are changed, or 1 issue is changed 10 times at the same moment (it does happen), 10 events are generated.

The replication layer in the Exalate architecture transmits each sync event to the remote instance, triggering a sync request, which is processed at its own pace.

Handling a full synchronization event takes a couple of seconds (give or take, depending on the size of the information to be transmitted).

 How long does it take to process all synchronization events when 10000 issues in 100 different projects have been updated?

The synchronization engine processes sync events sequentially (as of November 26, 2016).  Processing 10,000 issues in 100 different projects might take a couple of hours.  We have been synchronizing 10k+ issues as part of our performance tests, and we are looking at increasing the processing speed to meet the challenge that one of our prospects is facing.

 

 What is the storage overhead per issue?

Every replica of an issue is stored on 2 systems. A replica is the result of the transformation of an issue into a hub issue (through the data filter).  This replica is stored:

  • on the originating tracker to detect if relevant changes have been made since the last synchronization

  • in the target tracker to provide the content of the previous version.

 What architecture is used to ensure independent upgrades of each tracker?

The distributed architecture of the synchronization engine allows either end to be upgraded as appropriate.  The common hub issue model ensures that the synchronization information can be translated to the local context.

 Can the configuration of the local tracker change without impacting the configuration of the other trackers?

Yes.  Each integration point can evolve as long as the hub issue model and the Exalate API are respected.

 Does the solution support staging the configuration in another environment without affecting production synchronization?

Not yet - we are working on it.  When staging Exalate 1.3 or earlier, deactivate all instances so that the synchronization engine is not triggered.

 Is the synchronization status made visible in a simple and straightforward way?

Yes - the synchronization panel details all information relevant to the synchronization.

 Is the remote issue easily accessible?

Optionally.  The application administrator can configure how links to the remote issue are represented: either as a JIRA issue link, as a web link in the synchronization panel, or not at all.

 What happens when the remote issue is deleted?

The deletion of an issue is considered a synchronization event.  This triggers a number of events which ensure that future synchronization events are no longer processed.  The synchronization panel will indicate that the remote issue has been removed.

 Does the third party require administrative access to be able to synchronize?

Not at all - every application administrator can define how 'hub issues' are translated to the local context.

 Does Exalate support any other issue trackers or versions (both now and in the future)?

Currently, Exalate supports JIRA Core, JIRA Service Desk, JIRA Data Center, JIRA Software, JIRA Cloud and HP ALM. We are working on making other issue trackers compatible with Exalate.

 What is the impact when syncing with a less capable tracker - should you compromise on synchronization options?

No - the intermediate layer takes care of transporting whatever information the originating tracker wants to share.  It is up to the other side to take it into account.

 

 

 
