Before you plan your installation or upgrade, it helps to understand TeamForge and its services.
A TeamForge site consists of a core TeamForge application and several tightly integrated services that support it. In addition, you can integrate TeamForge with third party applications such as Nexus, Jenkins and Jira. Some of the TeamForge services are mandatory and some are optional. You can install all the services on a single server or distribute them across two or more servers.
- The core TeamForge application provides the Web interface that users see, and the API that other applications can interact with. It also includes the file system where some user content is stored, such as wiki pages.
- The site database is where most of the user-created content is stored and accessed. Documents, discussion posts, tracker artifacts, project administration settings: all that sort of thing lives in the database.
- The source control server ties any number of Subversion, Git/Gerrit or CVS repositories into the TeamForge site.
- The Extract, Transform and Load (ETL) server pulls data from the site database and populates the datamart to generate charts and graphs about how people are using the site. The datamart (Reports DB) is an abstraction of the site database, optimized to support the reporting functionality.
- EventQ is a TeamForge capability that provides traceability for product life cycle activities such as work items, SCM commits, continuous integration (CI) builds, and code reviews.
Note: EventQ is not installed by default when you install TeamForge 19.0 or later. However, you can install EventQ separately, if required. For more information, see TeamForge installation/upgrade instructions.
- Baseline is a TeamForge capability that lets you create a snapshot of selected configuration items from a given TeamForge project at a given point in time. For more information, see TeamForge Baseline.
- TeamForge Webhooks-based Event Broker, which is also referred to as the integration broker, is a webhooks-based message broker that pushes the messages of specific events received from a Publisher to a Subscriber. For more information, see TeamForge Webhooks-based Event Broker.
Here’s a list of available TeamForge services.
| Service | Mandatory or optional? | Service identifier | Description |
| --- | --- | --- | --- |
| ctfcore | Mandatory | app | Main TeamForge application server |
| search | Mandatory | indexer | Indexing and searching |
|  | Mandatory | NA (added in TeamForge 17.1) | Email server |
| ctfcore-database-mirror | Optional | NA | Mirror of operational database |
| etl | Optional | etl | ETL for Datamart |
| ctfcore-datamart | Mandatory if and only if you install etl |  | Datamart (Reports DB) |
| subversion | Optional | subversion | SVN version control |
| cvs | Optional | cvs | CVS version control |
| gerrit | Optional | gerrit | Git/Gerrit version control |
| gerrit-database | Mandatory if and only if you install gerrit | NA (added in TeamForge 17.1) | Database for Git/Gerrit. In a distributed setup, add this identifier to the server where you want to run the Gerrit database. In a distributed setup with multiple Git integration servers, add this identifier to all the servers that run the Git databases. For more information, see host:SERVICES token. |
| binary | Optional | binary | Artifact repository integration |
| binary-database | Mandatory if and only if you install binary | binary | Database for artifact repository integration. The binary app (binary) and database (binary-database) have to be installed on the same server. |
| reviewboard | Optional | reviewboard | Review Board code review tool |
| reviewboard-database | Mandatory if and only if you install reviewboard | NA (added in TeamForge 17.1) | Database for Review Board. In a distributed setup, add this identifier to the server where you want to run the Review Board database. |
| reviewboard-adapter | Mandatory if and only if you install reviewboard | NA | Adapter for reviewboard to copy |
| eventq | Optional | NA (added in TeamForge 17.4) | EventQ application server. In a distributed setup, add this identifier to the server where you want to run the EventQ application. |
| redis | Mandatory if and only if you install eventq | NA (added in TeamForge 17.4) | EventQ in-memory database/data structure server. In a distributed setup, add this identifier to the server where you want to run the EventQ application. |
| mongodb | Mandatory if and only if you install eventq | NA (added in TeamForge 17.4) | EventQ database server. In a distributed setup, add this identifier to the server where you want to run the EventQ database. |
| rabbitmq | Mandatory if and only if you install eventq | NA (added in TeamForge 17.4) | EventQ AMQP message server. In a distributed setup, add this identifier to the server where you want to run the EventQ message queue. |
| baseline | Optional | NA | Baseline service. In a distributed setup, add this identifier to the server where you want to run the Baseline application. |
| baseline-database | Mandatory if and only if you install baseline | NA | Baseline database service. In a distributed setup, add this identifier to the server where you want to run the Baseline database. |
| baseline-post-install | Mandatory if and only if you install baseline | NA | Baseline service that is used to synchronize user information between the Baseline and TeamForge databases. |
| webr | Mandatory. The WEBR application is installed by default when you install or upgrade to TeamForge. | NA | Webhooks-based Event Broker service that pushes the messages of specific events received from a Publisher to a Subscriber. |
| webr-database | Mandatory | NA | Database service for the TeamForge Webhooks-based Event Broker. |
These service identifiers are used in the host:SERVICES token. For more information, see host:SERVICES token.
In addition, installing TeamForge with service-specific FQDNs (instead of machine-specific host/domain names) is highly recommended so that you can change the system landscape later without any impact on the URLs; end users do not have to notice or change anything. For example, you can create FQDNs specifically for services such as Subversion, Git, mail, Codesearch and so on. For more information, see Service-specific FQDNs.
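For example (hypothetical names; teamforge.example.com stands in for your own sub domain), service-specific FQDNs might look like this:

```
svn.teamforge.example.com         ->  Subversion
git.teamforge.example.com         ->  Git/Gerrit
mail.teamforge.example.com        ->  Email
codesearch.teamforge.example.com  ->  Codesearch
```

Because each URL names the service rather than a machine, a service can later move to different hardware with only a DNS change, and the URLs users have bookmarked stay the same.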
Single Server or Distributed Setup?
If you are installing TeamForge, are you planning to install on a single server or to distribute TeamForge services across two or more servers? If the latter, how will you distribute the services?
In the default setup, all services run on the same server as the main TeamForge application. But in practice, only the TeamForge application needs to run on the TeamForge application server. The other services can share that server or run on other servers, in almost any combination.
Assess your own site’s particular use patterns and resources to decide how to distribute your services, if at all. For example, if you anticipate heavy use of your site, you will want to consider running the site database, the source control service, or the reporting engine on separate hardware to help balance the load. For examples on how to distribute TeamForge services, see host:SERVICES token.
When you distribute your services on multiple servers, you must do some configuration to handle communication between the services. Verify your basic networking setup. See Set Up Networking for TeamForge.
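As an illustration, a distributed layout can be sketched with host:SERVICES tokens in site-options.conf. The hostnames below are hypothetical, and the exact token syntax and identifiers should be checked against the host:SERVICES token documentation for your TeamForge version:

```
# Hypothetical two-server layout; server names are examples only.
server-01.example.com:SERVICES = app indexer etl subversion
server-02.example.com:SERVICES = gerrit gerrit-database
```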
PostgreSQL or Oracle?
PostgreSQL 11.1 is installed automatically when you install TeamForge 19.3. If you intend to use Oracle, CollabNet recommends that you let the installer run its course, make sure things work normally, and then set up your Oracle database and switch over to it.
If you want to use Oracle as your database, consider the following points:
- TeamForge 19.3 supports Oracle server 12c and Oracle client 12c.
- Oracle Express Edition is not supported for either client or server.
- Review Board was tested with PostgreSQL 11.1 only. Review Board with Oracle was not tested.
- Git integration works only with PostgreSQL. The Git integration uses PostgreSQL even if your TeamForge site uses Oracle.
The efficiency of your database can have an impact on your users’ perception of the site’s usability. If your site uses a PostgreSQL database (which is the default), you may want to consider tuning it to fit your specific circumstances. The default settings are intended for a small-to-medium site running on a single server. See What are the right PostgreSQL settings for my site? for recommendations from CollabNet’s performance team on optimizing PostgreSQL for different conditions.
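As an illustration only, a few of the postgresql.conf parameters that PostgreSQL tuning guides typically adjust are shown below. The values are placeholders for a mid-sized site with roughly 16 GB of RAM dedicated to the database, not CollabNet's recommendations; use the settings from the referenced article for your actual site:

```
# postgresql.conf -- illustrative values only, not official recommendations
shared_buffers = 4GB            # ~25% of RAM is a common starting point
effective_cache_size = 12GB     # memory the OS is expected to cache
work_mem = 32MB                 # per-sort/hash memory; raise carefully
maintenance_work_mem = 512MB    # speeds up VACUUM and index builds
```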
TeamForge supports integration with a wide array of third party applications such as Nexus and Jira. You may not want, or have, all of TeamForge's supported integrated applications, and some integrated applications may not run on all the platforms TeamForge supports. To accommodate a wider audience, the TeamForge install and upgrade instructions include steps to integrate these third party applications with TeamForge by default.
However, use your discretion to skip such steps if they are not relevant to your site. See TeamForge Installation Requirements to understand what it takes to run TeamForge 19.3 with integrations.
One-hop Upgrade Compatibility
Though the TeamForge 19.3 installer supports one-hop upgrade from TeamForge 18.2 or later versions, TeamForge 19.3 upgrade instructions, in general, are for upgrading from TeamForge 19.2 (including update releases, if any) to TeamForge 19.3.
There is no support for one-hop upgrade from TeamForge 18.1 or earlier to TeamForge 19.3. You must upgrade your site to TeamForge 18.2 or later and then upgrade to TeamForge 19.3.
Hardware and Backup
If you aren’t the person who first installed your current TeamForge site (or maybe, even if you are), it’s essential to catalog the hosts where your services are running and to know what configuration has been applied to them.
While upgrading to the latest TeamForge version, you can choose to upgrade on the same hardware or on new hardware. In general, it is good to have a backup plan in place. Same-hardware upgrades do not strictly require a backup, but it's recommended that you take one as a precaution. See Back up and Restore TeamForge for more information.
Other Dos and Don’ts
Here’s a list of dos, don’ts and points to remember when you install or upgrade TeamForge.
- Understand TeamForge installation requirements and plan your installation or upgrade.
- Get your TeamForge license key and keep it handy.
- Verify your basic networking setup before installing or upgrading TeamForge. See Set Up Networking for TeamForge.
- Look for new or modified `site-options.conf` tokens and update your `site-options.conf` file as required during the upgrade process. See Site Options Change Log.
- Set up a TeamForge Stage Server before you upgrade your Production Server.
- Stop TeamForge services on all servers in a distributed setup while upgrading to TeamForge 19.3.
- Uninstall hot fixes and add-ons, if any, before you start the TeamForge 19.3 upgrade procedure.
- As a result of changes to the logging framework in Java 9, GC logging options such as `PrintGCTimeStamps` are no longer supported. Remove these options from the JVM options tokens in your `site-options.conf` file while upgrading to TeamForge 18.1 or later. TeamForge provision fails otherwise.
- Do not customize your operating system installation. Select only the default packages list.
- While upgrading TeamForge, whether in place or on new hardware, always reuse the old `site-options.conf` file and make changes as necessary. Do not try to start with a new `site-options.conf` file. Reusing the old `site-options.conf` avoids many potential problems, particularly around the management of usernames and passwords.
- Do not manually modify TeamForge-managed site option tokens such as the `AUTO_DATA` token. See AUTO_DATA for more information.
- If you are creating symlinks, note that you must create symlinks only to the TeamForge data directory (`/opt/collabnet/teamforge/var`). Do not create symlinks to TeamForge application directories.
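To illustrate the Java 9 logging note above: the change amounts to deleting the unsupported GC-logging flags from the JVM options tokens in `site-options.conf`. The token name and surrounding flags below are assumptions for illustration; check your own file for the tokens your site actually sets:

```
# Hypothetical token value before the upgrade (fails to provision on
# TeamForge 18.1 or later):
JBOSS_JAVA_OPTS = -Xms1024m -Xmx2048m -XX:+PrintGCTimeStamps
# After -- the unsupported flag removed:
JBOSS_JAVA_OPTS = -Xms1024m -Xmx2048m
```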
Points to Remember
- Installing or upgrading TeamForge needs root privileges. You must log on as root or use a root shell to install or upgrade TeamForge.
- SSL is enabled by default and a self-signed certificate is auto-generated. However, you can use a few `site-options.conf` tokens to adjust this behavior. To generate the SSL certificates, see Generate SSL Certificates.
- For the ETL service to run as expected in a distributed TeamForge installation, all servers must have the same time zone.
- If you have Git integration on a separate server, both TeamForge and Git servers must have their time and date synchronized. Similarly, if Subversion is on a separate server, both TeamForge and Subversion servers must have their time and date synchronized.
- While you can run both EventQ and TeamForge on the same server, CollabNet recommends that approach only for testing purposes. For optimal scalability, run EventQ on a separate server.
- It’s highly recommended that you install the TeamForge Baseline services on a separate server as the baselining process can consume considerable CPU and database resources. For more information, see Install TeamForge in a Distributed Setup.
- No backup is required for same hardware upgrades. However, you can create a backup as a measure of caution. See Back up and Restore TeamForge for more information.
- Always use compatible JDBC drivers meant for specific database versions. See JDBC Drivers Reference for more information. Also see: Why do ETL jobs fail post TeamForge upgrade?
- You can run the initial load job any time after the installation of TeamForge. We recommend that you run it before you hand over the site to the users. For more information, see ETL Initial Load Jobs.
- SOAP50 APIs and event handlers are no longer supported in TeamForge 16.10 and later. Use the latest TeamForge SOAP/REST APIs.
- The TeamForge 19.3 installer expects the system locale to be `LANG=en_US.UTF-8`. TeamForge create runtime (`teamforge provision`) fails otherwise.
- Installing TeamForge with service-specific FQDNs (instead of machine-specific host/domain names) is highly recommended so that you can change the system landscape later without any impact on the URLs; end users do not have to notice or change anything. For example, you can create FQDNs specifically for services such as Subversion, Git, mail, Codesearch and so on. For more information, see Service-specific FQDNs.
- All such service-specific FQDNs must belong to a single sub domain and it is recommended to create a new sub domain for TeamForge.
- If you are using service-specific FQDNs:
  - A wildcard SSL certificate is required. An SNI SSL certificate cannot be used.
  - When SSL is enabled and no custom SSL certificates are provided, a self-signed wildcard certificate is generated for the sub domain.
  - When SSL is enabled and a custom SSL certificate is provided, the CN of the certificate is verified to be a wildcard CN.
- You cannot have a separate PUBLIC_FQDN for EventQ.
- The ability to run separate PostgreSQL instances for TeamForge database and datamart on the same server is being deprecated in TeamForge 17.11. If you have TeamForge database and datamart on separate PostgreSQL instances on the same server and if you are upgrading on a new hardware, you must Create a Single Cluster for Both Database and Datamart while upgrading to TeamForge 17.11 or later.
- While upgrading TeamForge-Git integration servers, it is important that Git master and slave servers are upgraded to the same version of TeamForge-Git integration. On sites with Git Replica Servers, you must upgrade the Git Replica Servers first and then upgrade the master Git servers.
- EventQ is not installed by default when you install TeamForge 19.0 or later. However, you can install EventQ separately, if required. EventQ installation instructions are included in the TeamForge installation/upgrade instructions, which you can ignore if not required for your setup.
- From TeamForge 19.3, the TeamForge Webhooks-based Event Broker is installed automatically when you install/upgrade TeamForge. In other words, you don't have to run the command `yum install teamforge-webr` separately.
- TeamForge supports Monit for monitoring services and recovering failed services. Monit is installed on the TeamForge application server to monitor the health of services and restart them when they fail. The Monit log file is located at
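Some of the points above are easy to verify up front. As one example, this small shell sketch checks the locale requirement before you run `teamforge provision`; the check itself is generic POSIX shell, and only the expected value comes from the note above:

```shell
# Pre-flight check: the TeamForge installer expects LANG=en_US.UTF-8.
check_locale() {
    if [ "$1" = "en_US.UTF-8" ]; then
        echo "locale OK"
    else
        echo "warning: LANG is '$1', expected en_US.UTF-8"
    fi
}

# Check the current environment before provisioning.
check_locale "${LANG}"
```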