
sig-core's Introduction

O3DE Core SIG - Meeting resources

Meeting notes from O3DE SIG-Core meetings.

General Resources

Open 3D Engine Special Interest Group for Core Systems (SIG Core)

This repository holds materials for the Open 3D Engine Foundation Special Interest Group for Core Systems (SIG-Core).

Description of SIG Core

We manage the Core Systems needed by O3DE and its libraries to fulfill the engine's requirements. The SIG primarily owns low-level systems such as memory management, application startup, and file input/output. See the SIG's charter for full details.

What the SIG does:

  • Contributes changes to O3DE Core Systems in libraries such as AzCore, AzFramework, and AzToolsFramework.
  • Collaborates with SIG-Simulation on changes to the Math library and Tick System.
  • Collaborates with SIG-Platform on the implementation of low-level systems (memory management, keyboard/mouse input, file input/output, etc.).
  • Maintains the Gem System and its integration with CMake.
  • Maintains the Unified Game Launcher.
  • Maintains the AZStd C++ standard library layer.
  • Implements Core features such as the memory management system, reflection system, archive system, virtual filesystem layer, settings registry, etc.
  • Provides support to the community on core system topics.
  • Works with other SIGs and the TSC on core systems issues.
  • Publishes and maintains standards and best practices.

How do I get involved?

  • Contact us on Discord. The SIG has a specific chat channel #sig-core.
  • Attend any of our public meetings by connecting to the #sig-core voice channel on O3DE's Discord server. See the O3DE Calendar for upcoming meetings and details.
    • SIG-Core hosts a general meeting once a month to discuss core systems features, recently contributed work, pending RFCs, and other items of interest to SIG-Core. Look for the meeting titled SIG-Core Meeting on the O3DE calendar. Meeting agendas are posted in advance as GitHub issues, and anyone can raise issues for discussion.
    • The SIG also meets once per week, on Wednesday at 9:00 AM PST/PDT (16:00 UTC from the 2nd week of March to the 1st week of November; 17:00 UTC from the 2nd week of November to the 1st week of March), to review any new issues of interest to the SIG. Look for the meeting titled SIG-Core Issue Triage and see the Triage Guide for more details of what happens at this meeting.
  • Subscribe to the SIG-Core mailing list, which is used to send out information on the next monthly meeting as well as the meeting agenda.

Who can attend or participate?

O3DE cannot work without the help and input from as many of its community members as possible. You do not need anyone’s permission to get involved and contribute to the project. The #sig-core channel on O3DE Discord is a great place to begin getting involved. Many of our community members regularly share ideas, updates, and resources there. You can also find a number of topics under the GitHub Discussions area which need your input.

Helpful information

Some of this guide was influenced by the SIG-Network Readme. Thanks to the contributors there.

sig-core's Issues

SIG Reviewer/Maintainer Nomination

Nomination Guidelines

Reviewer Nomination Requirements

  • 6+ contributions successfully submitted to O3DE
  • 100+ lines of code changed across all contributions submitted to O3DE
  • 2+ O3DE Reviewers or Maintainers that support promotion from Contributor to Reviewer
  • Requirements to retain the Reviewer role: 4+ Pull Requests reviewed per month

Maintainer Nomination Requirements

  • Has been a Reviewer for 2+ months
  • 8+ reviewed Pull Requests in the previous 2 months
  • 200+ lines of code changed across all reviewed Pull Requests
  • 2+ O3DE Maintainers that support the promotion from Reviewer to Maintainer
  • Requirements to retain the Reviewer role: 4+ Pull Requests reviewed per month

Reviewer/Maintainer Nomination

Fill out the template below, including the nominee's GitHub user name, desired role, and personal GitHub profile.

I would like to nominate @nemerle, to become a Reviewer on behalf of sig-core. I verify that they have fulfilled the prerequisites for this role.

https://github.com/o3de/o3de/commits?author=nemerle

Reviewers & Maintainers that support this nomination should comment in this issue.

RFC: Add new log macros to replace AZ_TracePrintf

Summary:

Add new AZ_Info and AZ_Trace log macros to enable separation of log levels for trace level logging; these macros will replace the use of AZ_TracePrintf, which will be deprecated.

What is the relevance of this feature?

O3DE lacks separation of info level log statements from debug trace.

O3DE has the notion of the following log levels, provided by three systems: ILogger (the AZ interface for logging), the Debug/Trace debug log macros, and ILog (legacy log reporting from CrySystem code). These are used for reporting asserts, errors, warnings, and informational messages:

| Level   | Debug/Trace            | ILogger      | ILog       |
| ------- | ---------------------- | ------------ | ---------- |
| Trace   | -                      | AZLOG_TRACE  | -          |
| Debug   | -                      | AZLOG_DEBUG  | -          |
| Info    | AZ_TracePrintf (*)     | AZLOG_INFO   | Log?       |
| Notice  | -                      | AZLOG_NOTICE | LogAlways  |
| Warning | AZ_Warn, AZ_WarnOnce   | AZLOG_WARN   | LogWarning |
| Error   | AZ_Error, AZ_ErrorOnce | AZLOG_ERROR  | LogError   |
| Fatal   | AZ_Assert (**)         | AZLOG_FATAL  | -          |

  • (**) AZ_Assert is a special case which halts execution and is compiled out in release.

For reference, here is a look at other common logging platforms and approaches:

| Level    | Description | Python   | Java (log4j) | glog    | Unreal      |
| -------- | ----------- | -------- | ------------ | ------- | ----------- |
| trace    | Designates finer-grained informational events than DEBUG. | - | TRACE | - | VeryVerbose |
| debug    | Designates fine-grained informational events that are most useful to debug an application. | debug | DEBUG | - | Verbose |
| info     | Designates informational messages that highlight the progress of the application at a coarse-grained level. | info | INFO | INFO | Display |
| warning  | Designates potentially harmful situations. | warning | WARN | WARNING | Warning |
| error    | Designates error events that might still allow the application to continue running. | error | ERROR | ERROR | Error |
| critical | Designates a severe error that may prevent the application from continuing to run properly. | critical | - | - | - |
| fatal    | Designates very severe error events that will presumably lead the application to abort. | - | Fatal | FATAL | Fatal |

A lot of existing O3DE code uses AZ_TracePrintf, which mixes informative messages with fine-grained informational events that are most useful for debugging an application. This RFC proposes splitting that macro up to give Debug/Trace users a clean separation.

Feature design description:

  • Add a new Trace level to LogLevel.
    • Note: this RFC does not propose adding a debug macro, but one could be added later if of value.
  • Mark AZ_TracePrintf as deprecated in code.
  • Add a new AZ_Trace macro that logs at Trace level.
  • Add a new AZ_Info macro that logs at Info level.
  • Set the default log level for O3DE applications to Info in both systems.
  • Inform code owners of the change and encourage migration from AZ_TracePrintf to the new macros.
  • After some period of time, replace all remaining usage of AZ_TracePrintf with AZ_Trace.
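
For illustration, here is a hypothetical before/after sketch of the migration, assuming the proposed AZ_Info and AZ_Trace macros keep AZ_TracePrintf's signature (a window name followed by a printf-style format string); the window name and messages are invented:

// Before: coarse progress messages and fine-grained debug detail share one
// macro and one level, so they cannot be filtered apart.
AZ_TracePrintf("AssetProcessor", "Processed %d assets\n", assetCount);
AZ_TracePrintf("AssetProcessor", "Job %s: builder fingerprint %s\n", jobId, fingerprint);

// After: intent is separated, so the proposed default Info level would show
// progress while hiding fine-grained detail unless Trace is enabled.
AZ_Info("AssetProcessor", "Processed %d assets\n", assetCount);
AZ_Trace("AssetProcessor", "Job %s: builder fingerprint %s\n", jobId, fingerprint);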

Technical design description:

Since this RFC proposes only two new macros, all of the relevant technical detail is covered in the feature design description above.

Note: There is a proposal to add UX for log level control in the Editor, See https://github.com/o3de/sig-content/blob/main/rfcs/rfc-69-logging-event-viewer.md

What are the advantages of the feature?

  • Separates info-level logging from trace/debug logging, cleanly expressing the intent of log statements for systems that rely on the Debug/Trace macros.
  • Users can use existing settings to tune log levels to their needs.
  • Avoids breaking existing code.
  • Code builders can easily separate messages that describe the state of their code/application from messages that provide extra details useful for debugging.

What are the disadvantages of the feature?

  • Code builders may see some of their log statements disappear and will need to take action to restore any true info level statements.
  • Does not unify Debug/Trace, ILogger, and ILog, so it remains confusing which should be used.
  • Does not address other log methods, which include GetErrorReport(), QMessageBox::critical, and CLogFile.

How will this be implemented or integrated into the O3DE environment?

Are there any alternatives to this feature?

  • Could unify Debug/Trace and ILogger macros, so O3DE has a consistent logging/filtering experience.
  • Provide a new unified logger

However, this RFC could be a stepping stone on those paths as it provides an immediate solution. It does not block a more extensive logger redesign.

How will users learn this feature?

  • Updated docs to cover the log levels and what they mean. The UX/Docs team would own setting best practices, which I believe is planned.
  • Posts to #sig-all
  • Impactful change notifications

Are there any open questions?

  • The time period to encourage migration to the new macros is TBD. Look to SIG-Core for guidance here.

Proposed SIG-Core meeting agenda for 2022-09-02

Meeting Details

The SIG-Core Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

Meeting Agenda

  • Nomination of @lumberyard-employee-dm to chair of sig-core

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

List any additional items below!

SIG Reviewer/Maintainer Nomination: amzn-mike

Nomination Guidelines

Reviewer Nomination Requirements

  • 6+ contributions successfully submitted to O3DE
  • 100+ lines of code changed across all contributions submitted to O3DE
  • 2+ O3DE Reviewers or Maintainers that support promotion from Contributor to Reviewer
  • Requirements to retain the Reviewer role: 4+ Pull Requests reviewed per month

Maintainer Nomination Requirements

  • Has been a Reviewer for 2+ months
  • 8+ reviewed Pull Requests in the previous 2 months
  • 200+ lines of code changed across all reviewed Pull Requests
  • 2+ O3DE Maintainers that support the promotion from Reviewer to Maintainer
  • Requirements to retain the Reviewer role: 4+ Pull Requests reviewed per month

Reviewer/Maintainer Nomination

Fill out the template below, including the nominee's GitHub user name, desired role, and personal GitHub profile.

I would like to nominate: amzn-mike, to become a Maintainer on behalf of sig-core. I verify that they have fulfilled the prerequisites for this role.

Reviewers & Maintainers that support this nomination should comment in this issue.

RFC: Initial Project setup Asset Processor efficiency improvements

Summary:

When using O3DE with a new project, the first time the Asset Processor is launched, it runs jobs over all source assets located in each active Gem scan folder, the entire project folder, and the engine's Assets directory.

Because a project tends to use many of the gems that come with the O3DE engine, the source assets for those Gems are processed again for each project. Furthermore, the source assets that are part of the engine "Assets" directory are also processed once for each project a user has on their machine.

It would be useful for all users of O3DE to be able to reduce or alleviate the time taken by the Asset Processor when launching a project for the first time.

What is the relevance of this feature?

This is important as there have been customer complaints about the amount of time it takes for first time startup when using O3DE.

The implementation of the No-Code Project RFC has drastically reduced the amount of time required to launch the engine applications from an SDK layout installed from the O3DE installer, but there is still a large time sink: processing thousands of assets the first time the Editor or the project's game launcher is launched.

Feature design description:

Using a project in the Editor or Game Launcher requires that a set of critical assets (shaders, fonts, render passes, etc.) be processed before a user is allowed to interact with the engine.

The Asset Processor is responsible for aggregating the source assets provided by the active project, any Gems the project has active, and the engine, and processing them all into an asset cache directory located in the project root "Cache" folder on a per-platform basis.

This processing can take upwards of tens of minutes the first time the Editor is launched with the active project on a user's machine before the user can use O3DE.

The following are several ideas to reduce the need to process source assets or to prevent duplicate processing of the same source assets.

Per Asset Scan Folder pre-populated cache

One idea is to allow the engine and Gems to provide a pre-populated cache of asset products that can be re-used by the project without needing to process the assets through the AP.

Add a shared/common Asset "platform" for products

For the large majority of source assets, processing results in the same product output regardless of the target OS platform. For example, many JSON, XML, and text-format source files produce the same product whether the Asset Processor is producing a product for Windows, Android, Linux, MacOS, etc.
Currently the Asset Processor runs a job for each platform it processes.

Add an option to evaluate the priority of source assets associated with user selected level

This option can be helpful when a user wants to load or launch a specific level in the Editor or Game Launcher, or to produce an Asset Bundle for the level.
The Asset Processor attempts to build all source assets that the project, active Gems, and the engine make available for use.
As a result, there are times when several thousand assets that will never be used are built on startup, causing a large spike in CPU time.

Audit and prune the list of Source Assets marked critical

There are many assets marked in code as critical to engine startup that are either not actually critical or could be replaced with a placeholder asset until they are processed.
The Atom RPI uses many utility functions that force a synchronous compile of an asset in the AP via its TryToCompileAsset API.
A lot of the Atom shaders and streaming images are marked as critical, and an audit of whether some of those assets are truly needed on startup can be done.

Improve algorithm for determining how many builder jobs the Asset Processor should kick off at once.

By default, the number of builder processes the Asset Processor connects to in the background is controlled by the minJobs and maxJobs settings in the AssetProcessorPlatformConfig.setreg file.
By default the Asset Processor launches builder processes up to the "logical core count - 1".
There are several problems with that approach. The first is that it doesn't take into account the ratio of RAM to core count. A machine with 8 cores and 8 GiB of RAM would launch 7 AssetBuilder jobs, each of which can take anywhere from ~512 MiB to 4 GiB of RAM depending on the type of source asset; texture and FBX processing uses more RAM than prefab and XML processing.
This leads to scenarios where the AP is using up the machine's CPU while potentially relying on swap memory to keep builders running even after RAM is exhausted.
The Asset Processor also doesn't continuously take current memory and core usage into account.
If a process such as the O3DE Editor or a web browser is using a percentage of the total memory, the AP doesn't account for the remaining available memory when scheduling jobs. A machine might have 8 cores and 16 GiB of RAM, but half of that RAM may be used by other processes and unavailable to the Asset Processor at the time.

It may be useful to provide configuration settings that let users control how many Asset Builder processes are launched and, separately, how many jobs run at once, based on user-provided heuristics.
Settings for the expected maximum amount of RAM used per job would be useful (such as ExpectedMaxRamPerJob or ExpectedAverageRamPerJob).
A setting that lets a user cap the total number of jobs created based on a maximum RAM threshold or available RAM could also be useful.
All of this could help prevent thrashing while running the Asset Processor and improve machine responsiveness.
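
As a minimal sketch of such a heuristic, assuming a hypothetical ExpectedMaxRamPerJob setting and a platform query for currently available memory (neither of which exists in O3DE today), the builder count might be clamped like this:

#include <algorithm>
#include <cstdint>

// Hypothetical heuristic inputs, e.g. read from AssetProcessorPlatformConfig.setreg.
struct JobHeuristics
{
    uint64_t expectedMaxRamPerJobBytes = 2ull * 1024 * 1024 * 1024; // ExpectedMaxRamPerJob (illustrative)
    uint64_t reservedRamBytes = 1ull * 1024 * 1024 * 1024;          // headroom for the OS and other apps
};

// Clamp the builder count by both core count and currently available RAM,
// instead of always using "logical core count - 1".
uint32_t ComputeMaxBuilderJobs(uint32_t logicalCores, uint64_t availableRamBytes, const JobHeuristics& h)
{
    const uint64_t coreLimit = logicalCores > 1 ? logicalCores - 1 : 1;
    const uint64_t usableRam = availableRamBytes > h.reservedRamBytes ? availableRamBytes - h.reservedRamBytes : 0;
    const uint64_t ramLimit = h.expectedMaxRamPerJobBytes ? usableRam / h.expectedMaxRamPerJobBytes : coreLimit;
    return static_cast<uint32_t>(std::clamp<uint64_t>(ramLimit, 1, coreLimit));
}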

SIG Reviewer/Maintainer Nomination: amzn-sj

Nomination Guidelines

Reviewer Nomination Requirements

  • 6+ contributions successfully submitted to O3DE
  • 100+ lines of code changed across all contributions submitted to O3DE
  • 2+ O3DE Reviewers or Maintainers that support promotion from Contributor to Reviewer
  • Requirements to retain the Reviewer role: 4+ Pull Requests reviewed per month

Maintainer Nomination Requirements

  • Has been a Reviewer for 2+ months
  • 8+ reviewed Pull Requests in the previous 2 months
  • 200+ lines of code changed across all reviewed Pull Request
  • 2+ O3DE Maintainers that support the promotion from Reviewer to Maintainer
  • Requirements to retain the Reviewer role: 4+ Pull Requests reviewed per month

Reviewer/Maintainer Nomination

Fill out the template below including nominee GitHub user name, desired role and personal GitHub profile

I would like to nominate: @amzn-sj, to become a Maintainer on behalf of sig-core. I verify that they have fulfilled the prerequisites for this role.

Reviewers & Maintainers that support this nomination should comment in this issue.

RFC: Job Dependencies on Products

Summary:

Job dependencies will be expanded to allow Builders to provide a list of products they specifically depend on. Asset Processor will also record the hash of every product as it's created. This will allow Asset Processor to only queue the dependency when the relevant products have actually changed.

What is the relevance of this feature?

Without this, long dependency chains can cause large numbers of files to run through CreateJobs, taking considerable time. This will reduce the number of CreateJobs calls to the ones which actually need to do work. All of the builders involved in the process of compiling shaders/materials are the main use case, but any builder with Job Dependencies can make use of the feature.

Feature design description:

AssetBuilders will be able to include a list of SubIds when declaring a Job Dependency. Asset Processor will store this list in the Asset Database. Whenever a file changes, Asset Processor will look up the list of direct dependencies and will only immediately queue those which do not include any SubIds. After a ProcessJob call is finished and we've collected the output products, AP will hash each product and compare the hashes with those stored in the database to get a list of updated products. AP will again look up the list of direct dependencies and this time will queue up any which list SubIds that are included in the list of updated products.
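
Expressed as a sketch (using invented type and helper names, not actual Asset Processor code), the post-ProcessJob flow described above would be roughly:

// Illustrative only: every name here is invented for clarity.
void OnJobProcessed(AssetDatabase& db, const SourceAsset& source, const AZStd::vector<Product>& newProducts)
{
    AZStd::vector<AZ::u32> updatedSubIds;
    for (const Product& product : newProducts)
    {
        const AZ::u64 newHash = ComputeFileHash(product.m_path);
        if (newHash != db.GetStoredProductHash(source, product.m_subId))
        {
            updatedSubIds.push_back(product.m_subId); // content actually changed
            db.SetStoredProductHash(source, product.m_subId, newHash);
        }
    }

    // Per this RFC, the query now takes the updated SubIds and returns only
    // the dependents whose declared SubId lists intersect them.
    for (const SourceAsset& dependent : GetSourceFilesWhichDependOnSourceFile(source, updatedSubIds))
    {
        QueueForProcessing(dependent);
    }
}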

As a brief overview of the existing types of dependencies, AP currently supports 2 types of build-time dependencies: Source Dependencies and Job Dependencies, as well as 1 type of run-time dependency: Product Dependencies.

In the diagrams below, "No regular dependencies found" refers to Source Dependencies or Job Dependencies which do not make use of this feature (meaning no SubId was specified). "Find dependents on me" means the AP is searching for all files which depend on the source file, in this case, StandardPBR.materialtype

Example flows (not representative of current behavior):

Before: [diagram: JobDependencies-Before]

After: [diagram: JobDependencies-After]

After, meaningful change: [diagram: JobDependencies-AfterChange]

Technical design description:

AssetBuilderSDK::JobDependency will be updated to have a new field: AZStd::vector<AZ::u32> m_productSubIds. This list will be stored in the asset database SourceDependencies table as a comma-separated list of SubIds; this avoids the need for a new table, and it shouldn't be necessary to write queries against individual SubId values since we only need to store/retrieve the value in bulk. Both the serialization and behavior context reflection will need to be updated. This will be available to all builders, including Python-based builders.

The Products table will also be updated to have a new Hash column. The hash of the product will be computed during AssetProcessed_Impl when preparing to store the products to the database.

GetSourceFilesWhichDependOnSourceFile will be updated to take a list of SubIds. If the list is empty, it will only return entries from the SourceDependencies table which have an empty SubIds column. This will result in the original call to this function (during CheckSource) returning only dependencies which do not care about a specific product. By default this will result in the same behavior as before. Builders which are not updated to provide the new info will return an empty list and still run in the exact same manner as before.

In AssetProcessed_Impl, after analyzing the list of products a job just produced, the list of prior products will be subtracted from the list of new products to produce a list of UpdatedProducts (the file hash will be the determining factor of difference). The resulting list of UpdatedProducts will be provided to GetSourceFilesWhichDependOnSourceFile which will return any direct dependencies with a non-empty list of SubIds where at least 1 of the SubIds is in the list of UpdatedProducts. All of these dependencies will then be put in the processing queue.

What are the advantages of the feature?

  • Trivial changes that result in the same product output ideally result in no dependencies being queued up.
  • When only some of the outputs have changed, only a subset of the dependencies may need to be reprocessed.
  • Implementation is fairly simple.
  • Usage by builders is optional and can be as simple as a 1-line addition.

What are the disadvantages of the feature?

There shouldn't be any direct disadvantages to using the feature.

How will this be implemented or integrated into the O3DE environment?

  • The Asset Database queries will need to be updated to add the Hash and SubIds columns.
  • AssetBuilderSDK needs to be updated to add the new field for storing the SubId list.
  • AssetBuilderManager needs to be updated to hash the products, generate the list of updated products, find relevant dependencies and queue them up.

Are there any alternatives to this feature?

The originally proposed solution to this problem was to allow the creation of "intermediate" jobs in a way that would allow jobs to create other jobs in a chain. Intermediate jobs are a more powerful feature that could also be used here, but they are more complex. This proposal is a much simpler solution that gives builders a 1-line change for cases where all that is needed is a dependency on a specific product. Additionally, it does not prevent the inclusion of such a feature in the future.

How will users learn this feature?

  • Asset Builder documentation will need to be updated to explain this feature.
  • Usage is as simple as calling m_productSubIds.push_back(subId) on a JobDependency in the builder's CreateJobs function, as sketched below.
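
For illustration, a builder's CreateJobs might declare such a dependency as follows. The JobDependency constructor arguments are approximate, and m_productSubIds is the field proposed by this RFC, not an existing API:

// Inside a builder's CreateJobs implementation (values are examples).
AssetBuilderSDK::SourceFileDependency sourceFile;
sourceFile.m_sourceFileDependencyPath = "Materials/StandardPBR.materialtype";

AssetBuilderSDK::JobDependency jobDependency(
    "Material Type Builder",                   // job key of the upstream job
    "pc",                                      // platform identifier
    AssetBuilderSDK::JobDependencyType::Order,
    sourceFile);

// The 1-line addition proposed by this RFC: only requeue this job when the
// upstream product with SubId 0 actually changes.
jobDependency.m_productSubIds.push_back(0);

jobDescriptor.m_jobDependencyList.push_back(jobDependency);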

Are there any open questions?

  • Do we need to support wildcards? There may be a use case where a builder does not know all of the products it depends on. The downside to supporting this is typically a much more complicated implementation with more significant performance penalties.

Proposed RFC Feature: JSON $import Support

Summary:

The goal is to add support for an $import directive in our JSON serializer so that JSON files can import other JSON files, thereby reducing the need for a lot of duplication in our JSON files.

Feature design description:

Import Formats

The following two import formats will be supported:

"A" : {"$import" : "some.json"}

"A" : {"$import": { "file": "some.json", "patch": [...] } }

Handling the $import directive

ImportResolver

The JsonImportResolver class provides the following static functions:

  • ResolveImports: This will recursively parse the loaded JSON document and, for each $import directive it encounters, call the ResolveImport function of the importer class. Nested imports are parsed recursively too.
  • RestoreImports: This will go through the list of $import directives in the top-level JSON and call the RestoreImport function of the importer class for each one.

This section focuses on importing objects.

BaseImporterObject

We will have a BaseImporterObject class. This class will have the following functions:

  • ResolveImport: The default implementation reads the JSON file being imported from disk and places it in a JSON object. If any patches are included with the import, the patch is placed in a patch object.
  • ApplyPatch: This will be called after all the nested imports are resolved for the imported object. The patch is applied to the fully resolved imported object.
  • CreatePatch: This will be called before RestoreImport to create a patch.
  • RestoreImport: The default implementation creates an $import directive with a JSON merge patch if one was provided.
  • GetImports: This retrieves a list of paths (JSON pointers) and associated import names (the name of the file/value to be imported) for all the imports. It is used by JsonImportResolver's RestoreImports to retrieve the paths of imports to restore.
  • GetDependencies: This lists all the imported files/values on which the loaded JSON file depends. This is required for asset dependency reporting.

Users can inherit from the BaseImporterObject class and customize the implementations of ResolveImport, RestoreImport, CreatePatch, and ApplyPatch to suit their specific needs (such as handling custom patch types), as sketched below.
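
As a hypothetical sketch of that customization point (the RFC names the hooks but not their exact signatures, so the shapes below are invented for illustration):

#include <rapidjson/document.h>

// A custom importer that rejects imported documents declaring an unsupported
// "version" field, otherwise deferring to the default disk-loading behavior.
class VersionCheckingImporter : public BaseImporterObject // proposed base class, per this RFC
{
public:
    rapidjson::Document ResolveImport(const char* importPath) override
    {
        rapidjson::Document doc = BaseImporterObject::ResolveImport(importPath);
        if (doc.IsObject() && doc.HasMember("version") && doc["version"].IsInt() && doc["version"].GetInt() > 2)
        {
            doc.SetObject(); // treat unsupported versions as an empty import
        }
        return doc;
    }
};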

The following sections describe how resolving and restoring will be handled for the $import directive:

ResolveImport

This will be called for each import (including nested imports). The implementation of this function should load the value for the file being imported, copy any patches if available, and return the resolved and unpatched value.

ApplyPatch

This will apply the patch to the imported value once all the nested imports have been resolved. The default implementation uses JSON merge patch.

CreatePatch

This will create a patch if required. The default implementation uses JSON merge patch.

RestoreImport

This will be called for each import in the top-level file (i.e., the file being resolved/restored). It will not be called for nested files because if A.json imports B.json, which in turn imports C.json, then any changes to the value imported from C should be patched in A.json's import directive. The intermediate import B.json shouldn't need to be restored.

Handling Arrays

Arrays are treated similarly to objects for the most part, so if the imported file has an array at its root, it will be imported as a whole into the importing object. Partial arrays are not supported because it's not easy to maintain the order of the imported elements: rapidjson doesn't have an insert function for arrays (not that I know of, anyway), so we would have to pop all elements after the import, push all the elements from the import into the array, and then push the elements that were there after the import. This is a lot of effort. More simply:

This is supported:

{
    "name" : "valid_array_import",
    "object": {"$import" : "array.json"}
}

This is not supported:

{
    "name" : "invalid_array_import",
    "object": [
        {"$import" : "array.json"},
        {"elem1" : "val1"},
        {"elem2" : "val2"}
    ]
}

In the above example, array.json has an array at its root:

[
    { "element_1" : "value_1" },
    { "element_2" : "value_2" },
    { "element_3" : "value_3" }
]

RFC: Update the Gem Templates to support a Public/Private API split

Summary:

The O3DE Template system provides several gem templates that users instantiate in order to start iteration on a new gem.
Those gem templates need to expose several CMake targets: targets that allow a gem to build its code into a library that is loaded by O3DE applications as a "plugin", targets for sharing the gem's public API with outside dependents (other gems and libraries), and private targets for sharing code among the gem "plugin" and test targets.

Currently, the gem templates expose a ${GemName}.Static library target whose purpose is to share source files and include directories among the gem plugin targets. It is meant to be a private target, not used outside of the gem.
Additionally, the gem templates expose ${GemName} and ${GemName}.Editor targets that are MODULE library types in non-monolithic builds and static libraries in monolithic builds.
CMake enforces that MODULE library targets can't be linked into other targets and can only be loaded dynamically using dlopen functionality. See the CMake add_library documentation for more details.

Since monolithic builds are meant for releasing a game or simulation application, all shared library targets are converted to static library targets and linked into the final application, in order to prevent shared library dependency issues, reduce the size of the overall binary folder, and allow the linker to perform de-duplication and optimization over all the code.

What is the motivation for this suggestion?

Why is this important?
Having a convention for separating a private vs. public API within the gem templates allows gem authors to enforce boundaries around which functionality dependent libraries should use and which functionality is internal and subject to change between versions.
Furthermore, a public API convention teaches gem users a recommended approach for depending on the code of other gems.

What are the use cases for this suggestion?
The primary use case is for dependent gems and other libraries to be able to know which targets they can depend on as part of the public API and therefore have an expectation of gem functionality that is meant to be stable.
This also empowers gem authors to separate their internal implementation from their publicly exposed API, which lets them set user expectations for the updates they make to their gems.
Gem authors would be allowed to change their internal implementation with little pushback, as public users wouldn't depend on that implementation. When it comes to updating the public API, gem authors have to be cognizant of the changes they make so as not to break other gems using that public API.
This creates an unspoken contract between the gem maintainer and the gem user: the gem maintainer is allowed to change the internal implementation in any way they would like without external oversight, while any public API changes can and should receive scrutiny from users of that gem.

What should the outcome be if this suggestion is implemented?
The following Gem Templates should be updated as of 2022-04-26
CppToolGem
DefaultGem
PythonToolGem

Suggested design description:

The first update to those gem templates would be to add new ${GemName}.API and ${GemName}.Editor.API INTERFACE targets that expose the public API of the gem.
Those targets are meant to carry the interface include directories, compile definitions and options, and build dependencies that any outside gem/library using this gem needs to depend on.

By default the following gem variant API aliases will be added to facilitate users depending on the Gem API of variants instead of the runtime and editor API targets.
Here is the list of new variant API aliases

  1. ${GemName}.Clients.API
  2. ${GemName}.Servers.API
  3. ${GemName}.Tools.API
  4. ${GemName}.Builders.API

The next update is to create Gem variant aliases for the API targets. This allows a gem to expose a specific API based on the Gem variants that it is used with.
For example a gem could expose a different API to the AssetProcessor than the Editor by making a separate ${GemName}.Builders.API INTERFACE target vs ${GemName}.Editor.API INTERFACE target.

Next, the ${GemName}.Static STATIC library target should change to an OBJECT target and be renamed to a private ${GemName}.Private.Object target as part of the gem template (this also applies to ${GemName}.Editor.Static being renamed to ${GemName}.Editor.Private.Object).

Why switch the STATIC library over to an OBJECT target?

An OBJECT target in CMake allows developers to compile source files into a collection of object files without creating a static or shared library. More information on OBJECT targets can be found here.
This has the major benefit that unnecessary copies of object files aren't stored in a static library, thereby saving disk space.
Furthermore, this de-clutters the SDK lib directory, as it would then contain only the public static libraries that are meant to be used by other targets.

An example where this helps is the LyShine.Static library target, which should be private to the LyShine Gem. In a source build of the engine, there would be two copies of the object files built for it: one on disk and one inside the "LyShine.Static.a" archive file. If the target is an OBJECT target instead, there is no "LyShine.Static.a" file.
Because there is no "LyShine.Static.a" file, creating an SDK layout involves no copying of that private library into the SDK.

What are the advantages of the suggestion?

The gem maintainer has a framework in which to expose public functionality, while reserving a section of their gem where they can change the implementation without worrying that they are breaking external users.
This makes it easier to implement a versioning scheme for code changes within a gem: perhaps API-breaking changes are major version bumps, while internal implementation changes are minor version bumps.

Users of newer gems will now have a known target name they can depend on in their libraries to access the functionality of the gem.

What are the disadvantages of the suggestion?

  1. The disadvantage is that existing gems will not have the new API targets.
    The expectation for users will be that, to add a dependency on a gem API, they only need to add a dependency on the ${GemName}.<variant>.API target.
    Users could be surprised when coming across gems that don't use the convention. This creates a bit of an inconsistency between older gems and newer gems.

  2. Using the updated Gem Template would result in 1 to 2 more TARGETS being created out of the box by default.
    If using an IDE such as Visual Studio with a gem created from the Default Gem template, this will result in 8 .vcxproj files being loaded instead of 6. This is balanced by the .Static vcxproj being split into smaller .API and .Private.Object vcxproj targets, which load faster.

How will this work within the O3DE project?

This should be seamless. Users would use the same o3de.py create-gem command to create their gems with the new public API and private object targets.

Are there any alternatives to this suggestion?

There is a feature request with CMake to allow specification of DIRECTORY scope for regular CMake Targets: https://gitlab.kitware.com/cmake/cmake/-/issues/23433

That would allow CMake to enforce that other gems can't depend on the private ${GemName}.Static targets, making it a lot easier to ensure an internal library isn't used outside of the Gem's root directory.

What is the strategy for adoption?

The first step is to update the Gem Templates to expose the new .API INTERFACE targets with the public include directories, compile options, and build dependencies.

The next step is to update the existing gems within the O3DE engine repo: https://github.com/o3de/o3de/tree/development/Gems with gem variant aliases for ${GemName}.<variant>.API targets.
Ideally the existing ${GemName}.Static targets inside gems that aren't a dependency of another gem are changed into OBJECT targets and renamed ${GemName}.Private.Object.

Gems within the engine that depend on another gem's ${GemName}.Static target should be updated to depend on that gem's ${GemName}.API target instead. The ${GemName}.API target for the gem being depended on can be an alias of ${GemName}.Static in this case.

Example: New Gem with .API targets

The following is an example of a TestGem created using the new Public/Private API split

# Currently we are in the Code folder: ${CMAKE_CURRENT_LIST_DIR}
# Get the platform specific folder ${pal_dir} for the current folder: ${CMAKE_CURRENT_LIST_DIR}/Platform/${PAL_PLATFORM_NAME}
# Note: o3de_pal_dir will take care of the details for us, as this may be a restricted platform
#       in which case it will see if that platform is present here or in the restricted folder.
#       i.e. It could be here in our gem : Gems/TestGem/Code/Platform/<platform_name>  or
#            <restricted_folder>/<platform_name>/Gems/TestGem/Code
o3de_pal_dir(pal_dir ${CMAKE_CURRENT_LIST_DIR}/Platform/${PAL_PLATFORM_NAME} "${gem_restricted_path}" "${gem_path}" "${gem_parent_relative_path}")

# Now that we have the platform abstraction layer (PAL) folder for this folder, that's where we will find the
# traits for this platform. Traits for a platform are defines for things like whether or not something in this gem
# is supported by this platform.
include(${pal_dir}/PAL_${PAL_PLATFORM_NAME_LOWERCASE}.cmake)

# The TestGem.API target declares the common interface that users of this gem should depend on in their targets
ly_add_target(
    NAME TestGem.API INTERFACE
    NAMESPACE Gem
    FILES_CMAKE
        testgem_api_files.cmake
        ${pal_dir}/testgem_api_files.cmake
    INCLUDE_DIRECTORIES
        INTERFACE
            Include
    BUILD_DEPENDENCIES
        INTERFACE
           AZ::AzCore
)

# Add the TestGem.Private.Object target
# Note: We include the common files and the platform specific files which are set in
# 1.testgem_private_files.cmake
# 2.${pal_dir}/testgem_private_files.cmake
ly_add_target(
    NAME TestGem.Private.Object OBJECT
    NAMESPACE Gem
    FILES_CMAKE
        testgem_private_files.cmake
        ${pal_dir}/testgem_private_files.cmake
    INCLUDE_DIRECTORIES
        PRIVATE
            Include
            Source
    BUILD_DEPENDENCIES
        PUBLIC
            AZ::AzCore
            AZ::AzFramework
)

# Here add TestGem target, it depends on the Private Object library and Public API interface
ly_add_target(
    NAME TestGem ${PAL_TRAIT_MONOLITHIC_DRIVEN_MODULE_TYPE}
    NAMESPACE Gem
    FILES_CMAKE
        testgem_shared_files.cmake
        ${pal_dir}/testgem_shared_files.cmake
    INCLUDE_DIRECTORIES
        PUBLIC
            Include
        PRIVATE
            Source
    BUILD_DEPENDENCIES
        PUBLIC
            Gem::TestGem.API
        PRIVATE
            Gem::TestGem.Private.Object
)

# By default, we will specify that the above target TestGem would be used by
# Client and Server type targets when this gem is enabled.  If you don't want it
# active in Clients or Servers by default, delete one or both of the following lines:
ly_create_alias(NAME TestGem.Clients NAMESPACE Gem TARGETS Gem::TestGem)
ly_create_alias(NAME TestGem.Servers NAMESPACE Gem TARGETS Gem::TestGem)

# For the Client and Server variants of TestGem Gem, an alias to the TestGem.API target will be made
ly_create_alias(NAME TestGem.Clients.API NAMESPACE Gem TARGETS Gem::TestGem.API)
ly_create_alias(NAME TestGem.Servers.API NAMESPACE Gem TARGETS Gem::TestGem.API)

# If we are on a host platform, we want to add the host tools targets like the TestGem.Editor MODULE target
if(PAL_TRAIT_BUILD_HOST_TOOLS)
    # The TestGem.Editor.API target can be used by other gems that want to interact with the TestGem.Editor module
    ly_add_target(
        NAME TestGem.Editor.API INTERFACE
        NAMESPACE Gem
        FILES_CMAKE
            testgem_editor_api_files.cmake
            ${pal_dir}/testgem_editor_api_files.cmake
        INCLUDE_DIRECTORIES
            INTERFACE
                Include
        BUILD_DEPENDENCIES
            INTERFACE
                AZ::AzToolsFramework
    )

    # The TestGem.Editor.Private.Object target is an internal target
    # which is only to be used by this Gem's CMakeLists.txt and any Subdirectories
    # Other Gems should not use this target
    ly_add_target(
        NAME TestGem.Editor.Private.Object OBJECT
        NAMESPACE Gem
        FILES_CMAKE
            testgem_editor_private_files.cmake
        INCLUDE_DIRECTORIES
            PRIVATE
                Include
                Source
        BUILD_DEPENDENCIES
            PUBLIC
                AZ::AzToolsFramework
                $<TARGET_OBJECTS:Gem::TestGem.Private.Object>
    )

    ly_add_target(
        NAME TestGem.Editor GEM_MODULE
        NAMESPACE Gem
        AUTOMOC
        FILES_CMAKE
            testgem_editor_shared_files.cmake
        INCLUDE_DIRECTORIES
            PRIVATE
                Source
            PUBLIC
                Include
        BUILD_DEPENDENCIES
            PUBLIC
                Gem::TestGem.Editor.API
            PRIVATE
                Gem::TestGem.Editor.Private.Object
    )

    # By default, we will specify that the above target TestGem.Editor would be used by
    # Tool and Builder type targets when this gem is enabled.  If you don't want it
    # active in Tools or Builders by default, delete one or both of the following lines:
    ly_create_alias(NAME TestGem.Tools    NAMESPACE Gem TARGETS Gem::TestGem.Editor)
    ly_create_alias(NAME TestGem.Builders NAMESPACE Gem TARGETS Gem::TestGem.Editor)

    # For the Tools and Builders variants of TestGem Gem, an alias to the TestGem.Editor API target will be made
    ly_create_alias(NAME TestGem.Tools.API NAMESPACE Gem TARGETS Gem::TestGem.Editor.API)
    ly_create_alias(NAME TestGem.Builders.API NAMESPACE Gem TARGETS Gem::TestGem.Editor.API)

endif()

################################################################################
# Tests
################################################################################
# See if globally, tests are supported
if(PAL_TRAIT_BUILD_TESTS_SUPPORTED)
    # We globally support tests, see if we support tests on this platform for TestGem.Tests
    if(PAL_TRAIT_TESTGEM_TEST_SUPPORTED)
        # We support TestGem.Tests on this platform, add dependency on the Private Object target
        ly_add_target(
            NAME TestGem.Tests ${PAL_TRAIT_TEST_TARGET_TYPE}
            NAMESPACE Gem
            FILES_CMAKE
                testgem_tests_files.cmake
            INCLUDE_DIRECTORIES
                PRIVATE
                    Tests
                    Source
            BUILD_DEPENDENCIES
                PRIVATE
                    AZ::AzTest
                    AZ::AzFramework
                    Gem::TestGem.Private.Object
        )

        # Add TestGem.Tests to googletest
        ly_add_googletest(
            NAME Gem::TestGem.Tests
        )
    endif()

    # If we are a host platform we want to add tools test like editor tests here
    if(PAL_TRAIT_BUILD_HOST_TOOLS)
        # We are a host platform, see if Editor tests are supported on this platform
        if(PAL_TRAIT_TESTGEM_EDITOR_TEST_SUPPORTED)
            # We support TestGem.Editor.Tests on this platform, add TestGem.Editor.Tests target which depends on
            # private TestGem.Editor.Private.Object target
            ly_add_target(
                NAME TestGem.Editor.Tests ${PAL_TRAIT_TEST_TARGET_TYPE}
                NAMESPACE Gem
                FILES_CMAKE
                    testgem_editor_tests_files.cmake
                INCLUDE_DIRECTORIES
                    PRIVATE
                        Tests
                        Source
                BUILD_DEPENDENCIES
                    PRIVATE
                        AZ::AzTest
                        Gem::TestGem.Private.Object
            )

            # Add TestGem.Editor.Tests to googletest
            ly_add_googletest(
                NAME Gem::TestGem.Editor.Tests
            )
        endif()
    endif()
endif()

If using the CMake Tools extension for VSCode, this is how the new Gem's targets would look: [screenshot of the generated target list]

Example: Existing Gem using an API target
The following is a contrived example of the NvCloth gem using the API target of the TestGem from the example above.
The gem maintainer of the NvCloth gem would add a build dependency on the Gem::TestGem.API target for the runtime API.

ly_add_target(
    NAME NvCloth.API INTERFACE
    NAMESPACE Gem
    FILES_CMAKE
        nvcloth_api_files.cmake
    INCLUDE_DIRECTORIES
        INTERFACE
            Include
    BUILD_DEPENDENCIES
        INTERFACE
            3rdParty::NvCloth
            AZ::AzFramework
            Gem::AtomLyIntegration_CommonFeatures.API
            Gem::TestGem.API
            Gem::EMotionFX.API
)

Other Concerns

The O3DE repo currently contains 80 top-level gems, plus an AutomatedTesting project which contains a single gem.
The Atom Gem contains several sub-gems (Atom_RHI, Atom_RPI, etc.) as well, which pushes the total number of gems to over 100.
The list of targets in a gem varies based on the needs of that gem and what it wishes to expose. Some gems have 0 targets, others have 2-3 targets depending on whether testing is enabled, while others might have 9+ targets that will show up in an IDE such as Visual Studio.

Gems are plugins that will often have disjoint authors, who should not need to worry about issues with any particular project when authoring their gems.

For example, a gem author who creates a particle system gem with 10 targets and hosts it in an external repo does not need to worry whether their gem can load within the engine's build solution, as only gems registered with the current project and the engine have their CMakeLists.txt visited.

But someone adding a new gem within the engine repo will have that gem's targets become part of the generated build solution for whatever generator was used (Ninja, Xcode, Unix Makefiles, etc.).

A subset of the gems that are registered with the engine and reside within the O3DE repo should be moved to another repo, since they are not core to the engine itself.

The following is a list of gems that might not be core to the engine itself and could be moved to another repo:

Gem List

Achievements
AssetValidation
AtomContent
AtomTressFX
AudioEngineWwise
AWSClientAuth
AWSCore
AWSGameLift
AWSMetrics
BarrierInput
Blast
CertificateManager
CustomAssetExample
DevTextures
ExpressionEvaluation
GameState
GameStateSamples
Gestures
HttpRequestor
InAppPurchases
LocalUser
LyShineExamples
MessagePopup
Metastream
Microphone
MotionMatching
NvCloth
PhysXDebug
Presence
PrimitiveAssets
SaveData
SceneLoggingExample
ScriptCanvasDeveloper
ScriptCanvasPhysics
ScriptCanvasTesting
ScriptedEntityTweener
ScriptEvents
SliceFavorites
Stars
StartingPointCamera
StartingPointInput
StartingPointMovement
TestAssetBuilder
TickBusOrderViewer
Twitch
UiBasics
VideoPlaybackFramework
VirtualGamepad

That would leave the engine repo with a set of "core" gems:

Atom
AtomLyIntegration
AudioSystem
Camera
CameraFramework
CrashReporting
DebugDraw
EditorPythonBindings
EMotionFX
FastNoise
GradientSignal
GraphCanvas
GraphModel
ImGui
LandscapeCanvas
LmbrCentral
LyShine
Maestro
Multiplayer
PhysX
Prefab
Profiler
PythonAssetBuilder
QtForPython
SceneProcessing
ScriptCanvas
SurfaceData
Terrain
TextureAtlas
Vegetation
WhiteBox

That would reduce the number of gems that come with the engine from 80 to 32.

Proposed SIG-Core meeting agenda for 27/JAN/2022 0900 PST/1700 GMT

Meeting Details

The SIG-Core Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

Meeting Agenda

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

List any additional items below!

Proposed RFC Feature: Create JSON Schema for Prefabs

Summary:

Currently there is a Prefab system for objects, but no documentation or schema published.

What is the relevance of this feature?

Proper documentation is necessary to allow the feature to expand and mature.
There are many needs for the schema to be published to allow automation of DCC and dynamically generated content.
As the Prefab system expands, having a properly defined and documented schema will allow developers to implement new features in a consistent manner.

Feature design description:

The schema should follow industry standards as defined here: https://json-schema.org/ and could possibly be explored further with Docson.

What are the advantages of the feature?

Clear concise documentation on the schema will ensure the feature stays consistent and extensible.

What are the disadvantages of the feature?

None

How will this be implemented or integrated into the O3DE environment?

There is no real integration; however, there may be the possibility of automated publication of the schema through code generation.

How will users learn this feature?

They will learn through the developer documentation site.

Proposed SIG-Core meeting agenda for 28-JUN-2021

Meeting Details

  • Date/Time: 28/JUN/2021 @ 1730 UTC / 1330 ET / 1030 PT
  • Location: Discord SIG-Core Voice Room
  • Moderator: Pratik Patel
  • Note Taker: Liv Erickson

The SIG-Core Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

  • Initial discussion was held about the SIG charter. The overall sentiment was that there was too much responsibility assigned to this group and things should be moved to others. We ran out of time before the discussion could be completed.

Meeting Agenda

Discuss agenda from proposed topics

  • Continue discussing the charter and decide which scope and responsibilities should be moved to other SIGs.

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

  • At the end of the meeting we should have a list of items we want to move out of the charter as well as suggestions as to which SIG should own them instead.

Open Discussion Items

List any additional items below!

SIG-Core 11/30 release notes

Please fill in any info related to the items below for the 11/30 release notes. Note: this has a due date of Friday, Nov 12th, 2021.

Features

Bug fixes (GHI list if possible)

Deprecations

Known issues

Proposed SIG-Core meeting agenda for JUNE-10-2021

Meeting Details

The SIG Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

First Meeting, no new updates

Meeting Agenda

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

List any additional items below!

Proposed SIG-Core meeting agenda for 1-APR-2022 - 0900 PT

Meeting Details

  • Date/Time: 1/APR/2022 @ 0900 PST
  • Location: Discord SIG-Core Voice Room
  • Moderator: @lumberyard-employee-dm
  • Note Taker: Volunteer needed!

The SIG-Core Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

Meeting Agenda

  • Create a new tools communication GEM that leverages AzNetworking to facilitate performant, extensible, and encryptable communication between tools processes @kberg-amzn
  • Update the GitHub CODEOWNERS file with paths to source that sig-core should automatically review @lumberyard-employee-dm
  • Discuss #31 @burelc-amzn

Discuss agenda from proposed topics

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

List any additional items below!

Proposed SIG-Core meeting agenda for 3/AUG/2022

Meeting Details

The SIG-Core Meetings repo contains the history of past calls, including a link to the agenda, recording, notes, and resources.

Meeting Agenda

  • C++20: Why, when and how we should upgrade the code base to C++20
  • How can we reduce the sensitivity of the error messages displayed in the console? @yuyihsu
    Please post additional agenda items in the comments so they can be added to the meeting agenda

SIG-Core Nomination/Election (2023-02-13 to 2023-03-03)

Current Chair: @amzn-pratikpa (discord: AMZN-pratikpa [Amazon])
Current Co-Chair: @lumberyard-employee-dm (discord: geds-dm [Amazon])
Election Official: @amzn-phist (discord: amzn-phist [Amazon] )

Nomination Timeline: February 13th, 2023 - February 24th, 2023
Election Timeline: February 27th, 2023 - March 3rd, 2023

SIG-Core Chair/ Co-Chair roles

The chair and co-chair serve equivalent roles in the governance of the SIG and are only differentiated by title in that the highest vote-getter is the chair and the second-highest is the co-chair. The chair and co-chair are expected to govern together in an effective way and split their responsibilities to make sure that the SIG operates smoothly and has the availability of a chairperson at any time.
Unless distinctly required, the term "chairperson" refers to either/both of the chair and co-chair. If a chair or co-chair is required to perform a specific responsibility for the SIG they will always be addressed by their official role title.
In particular, if both chairpersons would be unavailable during a period of time, the chair is considered to be an on-call position during this period. As the higher vote-getter they theoretically represent more of the community and should perform in that capacity under extenuating circumstances. This means that if there is an emergency requiring immediate action from the SIG, the chair will be called to perform a responsibility.

Responsibilities

The responsibilities of the SIG-Core group are detailed in the SIG-Core Charter: https://github.com/o3de/sig-core/blob/main/governance/SIG%20Core%20Charter.md#scope
SIG-Core weekly Triage is every Wednesday at 9:00 AM PST/ PDT

Nomination Process

Nomination may either be by an O3DE community member or self-nomination. A nominee may withdraw from the election at any time for any reason until the election starts on February 27th, 2023.

Nomination requirements

The only other nomination requirement is that the nominee agrees to be able to perform their required duties and has the availability to do so, taking into account the fact that another chairperson will always be available as a point of contact.

How to nominate

Nominations will be accepted for 2 weeks from February 13th, 2023 12:00AM PST to February 24th, 2023 11:59PM PST.
Nominate somebody (including yourself) by responding to this issue with:

  • A statement that the nominee should be nominated for a chair position in SIG-Core. Nominees are required to provide a statement that they understand the responsibilities and requirements of the role, and promise to faithfully fulfill them and follow all contributor requirements for O3DE.
  • The name under which the nominee should be addressed. Nominees are allowed to contact the election proctor to have this name changed.
  • The GitHub username of the nominee (self-nominations need not include this; it's on your post.)
  • Nominee's Discord username (sorry, but you must be an active Discord user if you are a chairperson.)

Election Process

The election will be conducted for one week from February 27th, 2023 12:00AM PST to March 3rd, 2023 11:59PM PST and held through an online poll. Votes will be anonymous and anyone invested in the direction of O3DE and SIG-Core may vote. If you choose to vote, we ask that you be familiar with the nominees.
If there is a current interim chair, they will announce the results in the Discord SIG channel as well as the SIG O3DE mailing list no later than March 7th, 2023 1:00PM PST. If there is no interim chair, the executive director will announce the results utilizing the same communication channels. At that time if there is a dispute over the result or concern over vote tampering, voting information will be made public to the extent that it can be exported from the polling system and the SIG will conduct an independent audit under the guidance of a higher governing body in the foundation.
The elected chairpersons will begin serving their term on March 8th, 2023 at 9:00AM PST. Tentatively SIG chairs will be elected on a yearly basis.

Other Links

SIG-Core Mailing List: https://lists.o3de.org/g/sig-core/topics
O3DE Calendar: https://lists.o3de.org/g/o3de-calendar/calendar

Proposed RFC Suggestion: SIG-Core Mission Statement

Summary:

In the last SIG meeting, it was proposed that the SIG have a mission statement.

What is the motivation for this suggestion?

A lot of things have been assigned under the stewardship of this SIG, some of which would fit better under other SIGs. A mission statement would give current and new members a clearer indication of exactly what this group is for, and also what it is not for.

Please comment with your suggestion for the mission statement and we will bring it up for discussion at the meetings until we achieve a consensus.

Proposed SIG-Core meeting agenda - 15/OCT/2021 0900PT/1600GMT

Meeting Details

The SIG-Core Meetings repo contains the history of past calls, including links to the agenda, recording, notes, and resources.

Meeting Agenda

Please comment on this issue with anything you would like added to the agenda.

Update SIG-Core README.md

Summary:

Update the o3de/sig-core README to include an overview and summary of the SIG's responsibilities.
Furthermore, information should be added about when meetings are scheduled and who the Chair and Co-Chair of the o3de/sig-core group are.

Add links to the sig-core governance documents, the sig-core Discord channel, and the lists.o3de.org mailing list.

What is the relevance of this feature?

This helps users quickly learn what the responsibilities of SIG-Core are, where to contact the group, and when the group meetings occur.

Proposed RFC Feature Quality/Scalability Framework

Summary:

A quality/scalability framework should be created that gives developers the ability to apply settings based on the platform and device their application is running on.

What is the relevance of this feature?

Developers do not have a framework for specifying and automatically selecting settings presets based on hardware, capabilities and user preferences. The legacy system which used CVarGroups has been removed and is no longer supported.

Developers typically use this kind of feature to specify Low/Medium/High/VeryHigh quality settings for target platforms so a game or simulation will perform as intended with the expected quality.

Feature design description:

A quality/scalability framework that satisfies the requirements can be built using CVARs and the Settings Registry.

  1. Settings are defined with CVARs, e.g. a rendering setting for turning on shadows named r_shadows
  2. Groups of settings are defined in the Settings Registry e.g. a group named q_graphics could control all the graphics quality settings.
  3. Rules for which settings to use for specific devices are defined in the Settings Registry
  4. Settings and rules for specific platforms are defined in the Settings Registry using the Platform Abstraction Layer (PAL) folder structure.

Developers will be able to use the initial framework without any graphical tools or Editor menus, but those tools will be part of a future RFC.
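As a sketch of step 1 above, each setting is an ordinary console variable declared in code; a hypothetical declaration (the default value and description here are illustrative) might look like:

#include <AzCore/Console/IConsole.h>

// Hypothetical cvar participating in the quality system; driven by the
// q_shadows group shown in the examples below
AZ_CVAR(int32_t, r_shadowResolution, 1024, nullptr, AZ::ConsoleFunctorFlags::Null,
    "Resolution of the shadow map in texels");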

Technical design description:

The technical design is mainly composed of settings groups and levels, device attributes, and device settings rules.

Settings groups and levels

Settings groups cvars can be defined (e.g. q_general, q_graphics, q_physics or GraphicsQuality, GeneralQuality etc.) and within each group, quality levels are defined (e.g. Low, Medium, High).

NOTE: the r_ prefix denotes rendering/graphics CVARs and the q_ prefix denotes CVARs or CVAR groups that control quality. These largely follow a pre-existing set of prefixes listed here: https://github.com/o3de/o3de/blob/development/Code/Legacy/CrySystem/ConsoleHelpGen.cpp#L672

Settings groups and quality levels are defined in .setreg files at the key /O3DE/Quality/Groups/<group>/Levels

File: O3DE/Registry/quality.setreg

{
  "O3DE": {
    "Quality": {
      "DefaultGroup": "q_general",  // default/fallback quality group
      "Groups": {
        "q_general": {
          "Description": "General quality group. 0 : Low, 1 : Medium, 2 : High",
          "Levels": [
            "Low",    // level 0 (based on array index)
            "Medium", // level 1
            "High"    // level 2
          ],
          "Default": "High", // default level, can also be an index if preferred
          "Settings": {}     // Settings could go in here, but it's more likely those will live in Gems
        }
      }
    }
  }
}

Gems like Atom would define new settings groups and levels as needed.

File: O3DE/Gems/Atom/Registry/quality.setreg

{
  "O3DE": {
    "Quality": {
      "Groups":{
        "q_general":{
          "Settings":{
            "q_graphics":[0,1,2,2]  // q_graphics has one more level than q_general
          }
        },
        "q_graphics": { // General Graphics settings that uses shadow and visibility quality group settings
          "Description":"Graphics quality group. 0 : Low, 1 : Medium, 2 : High, 3 : VeryHigh",
          "Levels": [ "Low", "Medium", "High", "VeryHigh" ],
          "Default": "High",
          "Settings": {  // Settings could be defined in a separate file if desired like quality.graphics.setreg (see Settings example below)
            "q_shadows": [0, 1, 2, 3],    // q_shadows group cvar defined below
            "q_visibility": [0, 1, 2, 3]  // q_visibility group cvar defined below
          }
        },
        "q_shadows": { // Shadows Settings levels
          "Levels": [ "Low", "Medium", "High", "VeryHigh" ],
          "Settings": {
            "r_shadowResolution":[256, 1024, 2048, 4096] // actual shadow cvars
          }
        },
        "q_visibility": { // LOD/Visibility settings
          "Levels": [ "Near", "Medium", "Far", "VeryFar"], // different level names
          "Settings":{
            "r_viewdistance":[100, 256, 512, 2048] // actual visibility cvars
          }
        }
      }
    }
  }
}

Settings

Each setting is a CVAR (e.g. r_shadows, p_gravity, etc.) and can be placed in a group, which is itself a cvar (e.g. q_graphics, q_physics, q_general), within a .setreg file at the key /O3DE/Quality/Groups/<group>/Settings.
CVARs can be configured with flags, based on the game's needs, to restrict who can change their values and when.

File: O3DE/Gems/Atom/Registry/quality.setreg

{
  "O3DE":{
    "Quality":{
      "Groups":{
        "q_graphics":{
          "Settings":{
            "q_shadows":[1,2,3,4], // (compact form) graphics has 4 levels, so 4 values are defined, this compact form makes it easy to compare values for each level
            "r_sun":1, // (compact form) r_sun is 1 for all levels
            "r_example":{ "0":1, "Medium":2} // alternate form can target a level by index or name
          }
        }
      }
    }
  }
}

These registry files would exist in the Gems that provide the cvars, but can be overridden in the active project.

Settings can be overridden for specific platforms (e.g. iOS, Android, Linux etc) by placing the overrides in .setreg files in the appropriate PAL folder.

File: O3DE/Gems/Atom/Registry/Platform/Android/quality.setreg

{
  "O3DE":{
    "Quality":{
      "Groups":{
        "q_graphics":{
          "Default":"Low",
          "Settings": {
            "q_shadows":[0,1,1,1],  // lower shadow quality levels
          }
        },
        "q_shadows":{
           "Settings": {
            "r_shadows":[0,1,1,1] // no shadows at all on lowest
          }
        }
      }
    }
  }
}

Device attribute API

A DeviceAttributeRegistrar exists in AzFramework for registering device attributes, and device attribute interfaces will be registered for the initial set of attributes listed below.
Device attribute names must be unique and are case-insensitive.

struct IDeviceAttributeInterface
{
    virtual ~IDeviceAttributeInterface() = default;

    // get the name of the device attribute e.g. gpuMemory, gpuVendor, customAttribute42
    virtual AZStd::string_view GetDeviceAttribute() const = 0;

    // get a description about this device attribute, used for help text and eventual UI
    virtual AZStd::string_view GetDescription() const = 0;

    // evaluate a rule and return true if there is a match for this device attribute
    virtual bool Evaluate(AZStd::string_view rule) const = 0;

    // get the value of this attribute
    virtual AZStd::any GetValue() const = 0;
};

class DeviceAttributeRegistrarInterface
{
public:
    // callback invoked per registered device attribute interface (signature assumed for illustration)
    using VisitInterfaceCallback = AZStd::function<void(const IDeviceAttributeInterface&)>;

    virtual ~DeviceAttributeRegistrarInterface() = default;

    // register a device attribute interface, deviceAttribute must be unique, returns true on success
    virtual bool RegisterDeviceAttribute(AZStd::string_view deviceAttribute, AZStd::unique_ptr<IDeviceAttributeInterface> deviceAttributeInterface) = 0;

    // visit device attribute interfaces with a callback function
    virtual void VisitDeviceAttributes(const VisitInterfaceCallback&) const = 0;

    // find a device attribute interface
    virtual IDeviceAttributeInterface* FindDeviceAttribute(AZStd::string_view deviceAttribute) const = 0;
};
using DeviceAttributeRegistrar = AZ::Interface<DeviceAttributeRegistrarInterface>;
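As a usage sketch against the interfaces above (DeviceAttributeRam and QueryPhysicalRamMiB are illustrative names, not the actual implementation), a RAM attribute might be implemented and registered like this:

#include <AzCore/std/any.h>
#include <AzCore/std/smart_ptr/make_unique.h>
#include <AzCore/std/string/regex.h>
#include <AzCore/std/string/string.h>

size_t QueryPhysicalRamMiB(); // hypothetical platform-specific query, not part of the RFC

class DeviceAttributeRam final : public IDeviceAttributeInterface
{
public:
    AZStd::string_view GetDeviceAttribute() const override { return "RAM"; }
    AZStd::string_view GetDescription() const override { return "Total device RAM in MiB"; }

    // evaluate a regex rule against the RAM amount formatted as a string
    bool Evaluate(AZStd::string_view rule) const override
    {
        const AZStd::string value = AZStd::string::format("%zu", m_ramMiB);
        const AZStd::regex pattern(rule.data(), rule.size());
        return AZStd::regex_match(value, pattern);
    }

    AZStd::any GetValue() const override { return AZStd::any(m_ramMiB); }

private:
    size_t m_ramMiB = QueryPhysicalRamMiB();
};

// registration at startup, e.g. from a system component's Activate()
void RegisterRamDeviceAttribute()
{
    if (auto* registrar = DeviceAttributeRegistrar::Get())
    {
        registrar->RegisterDeviceAttribute("RAM", AZStd::make_unique<DeviceAttributeRam>());
    }
}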

Initial device attributes will be created for:

  • Device model
  • RAM
  • GPU Vendor
  • GPU Model
  • GPU Memory

Device-specific settings

Rules can be specified in Settings Registry files to set CVARs based on device attributes like the model, amount of RAM etc.
Rules are ECMAScript regular expressions or short Lua scripts that evaluate to true or false.

If a device matches multiple rules, the system will use the /O3DE/DeviceRulesResolution setting to determine how to apply the rules. Possible values are:

  • "Min" use the lowest setting for all matching rules
  • "Max" use the highest settings for all matching rules
  • "First" use the first rule that matches, based on the order of the rules in the Settings Registry
  • "Last" use the last rule that matches, based on the order of the rules in the Settings Registry (DEFAULT)

A sys_print_device_rules console command will output all matching rules and their settings to the console to help developers debug device rules.

Warnings will be displayed in the console logs when a device matches multiple rules.

Custom rules can be written in Lua so developers can express rules that are difficult or impossible with regular expressions. Lua rules are enclosed in a dollar sign followed by parentheses: '$()'.

Example: "apiVersion":"$((value > 1.2 and value < 2.4 ) and value != 1.5)"

Alternately, a more verbose option is to use an object format like:

 "apiVersion":{
     "Lua": "(value > 1.2 and value < 2.4 ) and value != 1.5"
  }

File: Registry/devices.setreg

{
  "O3DE":{
    "Devices":{
      // settings for the Graphics group for all devices that match rules in the external .setreg file
      "GPU low":{          // A human readable descriptive name for the device group
          "Settings": {
              "q_graphics":0 // the graphics quality level to use
          },
          "Rules":{        // apply the settings for all devices that match any of these rules
              "$import":"devices.gpu-low.rules.setreg"  // import rules from an external file
          }
      },
 
      // example of importing device settings/rules for Samsung devices
      "Samsung": {
         "$import":"devices.samsung.setreg"
      },
 
      // example of custom override rule
      "Custom": {         // A human readable descriptive name for the device group
          "Settings": {   // The name of the scalability group to use with the Level
              "q_graphics":0,       // the Graphics quality level to use
              "r_shadows":[0,0,1,1] // device specific settings overrides
          },
          "Rules":{   // apply the quality level and settings for all devices that match any of these rules
              "LG Devices": {"model":"^LG-H8[235]\\d"}, // regex model match
              "GeForce 6800": {"gpuVendor":"0x10DE", "gpuModel":"0x004[0-8]"}, // gpu vendor and gpu model regex
              "Adreno": {"gpuName":"Adreno.*530", "apiVersion":"Vulcan[123]"}, // gpu name regex with graphics api version regex
              "Experimental":{"$import":"devices.experimental.rules.setreg" } // import rules from an external file
          }
      },
      "LG XYZ Overrides": {
          // this example shows how you would override settings for a specific device match
          // without providing the overall quality level cvar
          "Rules":{
              "LG Devices": {"model":"^LG-H8[235]\\d"}, // regex model match
              "LUA example": {"apiVersion":"$value > 1.0 and value < 2.5"}
          },
          "Settings":{
              "r_shadows":0 // device specific settings override
          }
      }
    }
  }
}

What are the advantages of the feature?

  • Meets the requirements
  • Uses existing systems (CVARs, Settings Registry, PAL)
  • Data-driven settings, groups, rules
  • Flexible and customizable

What are the disadvantages of the feature?

  • Additional processing of quality settings and device rules on application start up.
  • Developers must learn how to use the new system.

How will this be implemented or integrated into the O3DE environment?

The bulk of the implementation is explained in the technical design description section. Developers with existing projects will need to copy the device rules and quality settings registry files from the default template into their own project and customize them.

Are there any alternatives to this feature?

  1. Re-introduce the Lumberyard .cfg spec system.
    1. This system is known to work, but all of the specifications and CVAR settings would need to be updated to work with O3DE.
    2. The primary reason this option was not selected is it relies on hard coded specification detection and does not use the Settings Registry.
    3. Settings for multiple platforms are intermingled instead of using PAL to separate them.
  2. Use a simple hard-coded low/medium/high/very-high quality system.
    1. Because O3DE is a modular engine with uses outside of gaming, it is unlikely that a "one size fits all" approach will suit the needs of all developers. Making changes would also mean developers would need to modify C++ code and recompile the engine.
  3. Port the Lumberyard spec .cfg system and device .xml and .txt files into Settings Registry.
    1. This is a similar amount of work and is less flexible than the proposed system. The main time/effort savings would be that only regex would need to be supported for device rules and debugging CVARs would not be needed, but the limitations of the Lumberyard system would remain.

How will users learn this feature?

Documentation will be available on the main website and in comments inside the Settings Registry files.

Are there any open questions?

  1. What is the best name for this framework/system? 'Quality' or 'Scalability'?
  2. What are the best locations in the Settings Registry for the system?
  3. Is there a better/simpler way to define device rules than using regex and Lua?
  4. Should this system also let developers specify asset build quality settings?

Proposed SIG-Core meeting agenda (Date:21/JUL/2021/Time: 0900 PST, 1700 BST)

Meeting Details

The SIG-Core Meetings repo contains the history of past calls, including links to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

  • Discussed SIG Charter and agreed it was too broad.
  • Consensus was that a mission statement would help narrow the focus of the charter

Meeting Agenda

  • Discuss any proposed mission statements: #9
  • Discuss coding standards: #5
  • Continue discussion about SIG charter

Please list any additional items you would like to add to the agenda in the comments.

Proposed SIG-Core meeting agenda for 2022-06-DD

Meeting Details

  • Date/Time: Month Date, Year @ 14:00 UTC / 12:00pm EDT (Eastern Daylight Time)
  • Location: Discord SIG-Core Voice Room
  • Moderator: @lumberyard-employee-dm
  • Note Taker: @lumberyard-employee-dm

The SIG-Core Meetings repo contains the history of past calls, including links to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

Meeting Agenda

AR Related Questions

  • What are SIG-Core Contributors' opinions on the purpose of the development branch?
  • How stable is AR for SIG-Core contributors? Is AR stability impacting the ability to contribute?

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

List any additional items below!

Proposed RFC Feature O3DE as an SDK

Overview

In order to lower the bar of technical expertise needed to use O3DE, as well as to reduce build times for users not modifying the core Engine, the O3DE as an SDK feature was proposed. The primary goal of this feature is to allow users to use and distribute the engine with pre-built libraries and applications, allowing work on a Game or Simulation Project without the need to build the engine from source code. The SDK will contain the Source Assets, configuration files, CMake build scripts, library and binary artifacts in a layout that is relocatable and transferable to other users. Furthermore, it provides a mechanism for registering multiple engines on a user's machine that can be associated with O3DE Projects that are also registered.

This effort involves a fundamental shift in the way developers use the engine.
Previously, Lumberyard developers had to store their Game Projects within the Engine source directory and build their game from there. This inherently made the engine mutable and therefore non-distributable.
Organizations, studios and users were not able to share their Game Projects with each other without also providing that specific build of the engine.
In this manner the Lumberyard Engine was the consumer of the Game Project. This will be called the engine-centric workflow.

To accomplish the goal of using O3DE as an SDK, there needs to be support for O3DE Game Projects to consume the engine instead. Therefore a new workflow will be added, called the "project-centric" workflow.

Summary of Work Already Done

The following is a summary of work that has already been completed for the O3DE as an SDK effort before the public release of O3DE

  • Provided users with a much smaller package to download and start working with the O3DE engine. Previously, distributing Lumberyard involved making a package that contained the build artifacts for every project within the engine source directory, as well as additional binaries and scripts (/Tools) that weren't needed to run the Engine.
    Included in this package were standalone applications that had to be run to download all the 3rdParty libraries (SetupAssistant), as well as additional Game Projects, sometimes containing legacy file structure, along with their large source assets. Most of these were not needed by users creating their own project.
  • Provided support for storing Projects and Gems externally from the Engine.
    This opens up the possibility of distributing Projects and Gems individually and can drive the adoption of a community repo of shared Projects and Gems.
  • Reduction of build times for developers and content creators.
    Builds take minutes instead of hours as only the Project code and any Project registered gems need to compile. The slowest part of interacting with the Engine should be link time.
  • Allow users to create their own SDK layouts of the Engine that they can distribute to their user/customer/employee base (Windows).
  • Provided a python package for registering Engines, Projects, Gems and Templates.
    Also provided commands for creating projects and gems from templates and transforming projects and gems into templates.
  • Provided a way to associate an Engine with a Project, to allow a project to not be tied to an engine location on a specific machine.
    Different users are able to install their engine anywhere on the local machine and use that registered engine with the shared Game/Simulation Project as long as it contains the same engine identifier.
  • Provided commands to register Gems with either a Project or Engine, and provided a way to enable/disable Gems within a Project.

The engine installer (O3DE_installer) effort will wrap the work proposed in this RFC to provide the user with an application that can install the O3DE SDK as well as any necessary dependencies (cmake, python, etc.).
Furthermore, the Project Manager will wrap the O3DE python package scripts provided by this effort in a GUI that allows users to manage their Projects, Gems and Templates.

Summary of Upcoming Work

This provides a short summary of the upcoming work of the O3DE as an SDK effort in the next milestone.

  • Add support for users to create an SDK layout for Linux
  • Add support for users to create an SDK layout for MacOS
  • Add support for users to create an SDK layout for iOS
  • Add support for users to create an SDK layout for Android.
  • Add support for creating an SDK layout out of a Project-Centric Source-Engine workflow.
    The SDK layout should contain both the Project and Engine build artifacts along with the Source Assets, build and configuration scripts and other metadata.
  • Investigate the ability to create a hybrid SDK that can contain both host-platform Tools (Linux, MacOS, Windows) and non-host-platform SDK layout files (Android and iOS).
    This will help ease the development process for iOS and Android developers wanting to work with a single O3DE SDK to build their Android or iOS game.
  • Provide scripts for users to create a Project Game Release layout, which includes the processed assets, configuration files, loadable archive files and the launcher applications.
    This provides a mechanism for a Game/Simulation team to bundle all the artifacts their end-users need to run the game/simulation.

Business Need

Feedback from users of Lumberyard has indicated that getting from download to a usable Editor takes quite a long time and causes users to lose interest in the product. The O3DE as an SDK effort allows Studios, Game Designers, Artists and Novices to use O3DE without the barrier of needing to perform long builds of the Engine in order to author a game or simulation.
There is also a need for a consistent workflow for teams to create the layout needed to build and deploy their final game builds with the appropriate processed assets and configuration files. Moreover, the O3DE as an SDK effort opens up possibilities for online communities where organizations and individuals can share their own Projects, Gems and Templates. It provides a mechanism for Gem authors to offer pre-built versions of their gems, reducing iteration time for users wanting to experiment with a gem.

Feature Breakdown

So far the O3DE as an SDK effort is already through 2 milestones.
The tasks for those milestones are listed in the Appendix below.
The high-level work needed for Milestone 3 is as follows.

Milestone 3 - Linux, Mac, iOS and Android Install Layout Support, Project Release Layout Creation, Hybrid SDK Investigation

  • Create an SDK Layout for MacOS: Make sure an SDK layout can be created on MacOS for all supported configurations and that the SDK can be used to build a project.
  • Create an SDK Layout for Linux: Validate that an SDK layout for Linux works for all supported configurations and that the SDK can be used to build a project.
  • Create an SDK Layout for iOS: Add support for an SDK layout that can be used to build an O3DE Project targeting iOS. This task doesn't imply that the SDK contains the MacOS tools needed to iterate on a Project, such as the Editor and AssetProcessor.
  • Create an SDK Layout for Android: Implement changes to allow an SDK layout targeting the Android platform to be created (at the very least on Windows). This task would also need to verify that a GameLauncher can be built and run on an Android device using the SDK.
  • Support SDK Layout Creation in a Project-Centric Source-Engine workflow: This kind of SDK layout can be used to iterate on the project, but it is not meant to be distributed to the end-user (Player). It facilitates a workflow where a team of developers can work on a Project with the Engine source code and then provide a pre-built SDK to their Artists/Game Designers to speed up iteration.
  • Add support for creating a Project Release layout with the bundled assets: This involves adding a CMake step that allows users to not only build all the Project's build artifacts, but also process the assets needed to run the project, bundle those assets using the Asset Bundler, and copy the build artifacts with the bundled assets to a "Project Game/Simulation release" layout folder. This layout will contain all the files O3DE can provide to run the Game on an end-user machine.
  • Investigate the creation of a Hybrid SDK Layout: An SDK for non-host platforms such as Android and iOS will also need a subset of the SDK from the host platforms of Windows, Linux and MacOS. In order to process Assets for Android, an Asset Processor is needed on Windows, but not necessarily the libraries needed to build Windows applications.
  • Add support for creating an SDK Layout for Monolithic builds: Users will be able to create an SDK layout from an Engine configured to build monolithic binaries. The 'release' configuration will be officially supported, while the 'debug'/'profile' configurations could work but will not receive official support. Monolithic builds are normally used in the final released game executable to avoid the need to load shared libraries at runtime and to optimize the size of the game package.

Optional Work

These are additional tasks that can make it easier for teams/studios to develop a game or simulation product using the SDK, but they are not strictly required.

  • Support SDK Layout Creation in a Project-Centric SDK-Engine workflow: This is a special case of a studio or team taking a pre-built Engine SDK layout, using it to create a project, and then using the Project-Centric workflow to create an SDK layout containing the project build artifacts plus the Engine build artifacts. Such an approach is meant for distribution within a game team or studio working on a project. This isn't strictly required, as it is expected that most game teams would need to modify the engine.

Dependencies

The remaining work in the O3DE as an SDK effort would allow users to use the Project Manager application on Linux and Mac for project management using a UI.
This will also unlock the ability for the Installer team effort or some other initiative to create packaging or installer for installing the O3DE SDK on Linux or Mac.

Scope

The following details what work is in scope for the O3DE as an SDK effort and what is out of scope.

In-Scope

  • Registering projects, gems, and engines.
  • Querying information from registration manifest files (o3de manifest).
  • Creating templates from a directory structure and instantiating templates to a location on the filesystem.
  • Creating an SDK layout using CMake.
  • Creating a "Project Release Layout" that contains all the artifacts needed to run a game on a platform.

Out of Scope

  • This effort only covers the work to support users (Game Studio, individual, O3D Foundation) being able to create an SDK layout.
    How the O3D Foundation delivers a pre-built SDK to users is part of the Installer work being done by the sig-release group.
  • Currently there exists an installer executable for Windows. Any plans for an installer for Mac (perhaps a .dmg) or a package for Linux are outside the scope of this effort.
    Linux especially so, since the package system is tied to the distribution (i.e. Yum, DPKG, apt, pacman), with differing package formats (.deb, .tar.xz, .rpm, etc.).
  • The creation of a GUI tool for configuration and registration of Projects, Gems and Templates.
    The Project Manager application (O3DE.exe) is owned by the Project Manager work being done by the sig-release group; that work wraps the O3DE python package which provides this functionality.
  • Fixing any issues with the AssetBundler if it has trouble gathering the Asset Dependencies needed for a project release layout.
    Any AssetBundler issues will be tagged to the sig-content group.

Questions and Concerns

IDE Debugging (Visual Studio/Xcode)

The ability to debug SDK applications in popular IDEs is an important aspect of driving users to the Project-Centric SDK workflow over the Project-Centric Engine Source workflow.
In particular, the Editor and AssetProcessor applications would benefit from appearing as targets within the CMake build system. This would allow IDEs like Visual Studio and Xcode to present a target that users can debug.
Other tools for debugging SDK applications, such as WinDBG, lldb and gdb, don't need such an IDE target and are not a concern, since users of these tools are expected to know how to find and attach to an application from the command line.

Questions

  • Where will debugging symbols be stored for the pre-built binaries of the SDK layout?
    Also, how will users obtain debugging symbols for downloaded copies of the SDK layout?
  • On Windows, PDB files are needed for debugging. How can users obtain them?
    The PDBs could be uploaded to an O3DE Symbol Server, or there could be a separate downloadable package with just the .pdb files in it.

Concerns

  • The availability of source code debugging of the SDK layout.
    The commit an SDK layout was built against would be useful for users to determine which git commit hash is needed to provide source-level debugging of the SDK.
    The MSVC debugger, lldb, gdb, and WinDBG all support specifying a path to source code for debugging, so users would need to know the matching commit to get accurate debugging.

Hybrid Layout

Support for a Hybrid SDK layout, which combines host-platform build artifacts with non-host-platform build artifacts, is needed for full use of the SDK on non-host platforms, but how this will be tackled is not yet defined.
By Hybrid SDK, what is meant is distributing an SDK layout on Windows that can be used to build an O3DE Project for Android, and therefore needs the Android build artifacts (shared library .so and static library .a files), but also requires the Windows Editor.exe and AssetProcessor.exe binaries in order to author content for the O3DE Project and to process the Assets for the Android platform.

Issues

  • Because O3DE uses the CMake build generator, only one platform at a time can be configured and built into any specific build folder.
    This means that building an SDK layout for a platform such as Android or iOS requires at least building a subset of the SDK layout for the host platform (Windows, Linux, Mac) and adding that to the layout, followed by building the Android or iOS platform itself and copying it into the layout.
    As these are isolated build solutions, the process of creating the SDK layout is no longer a single INSTALL step, but multiple steps, one per platform.

  • Furthermore, the INSTALL step also supports only a single configuration at a time, so in order to create a complete install layout for a single platform the INSTALL step must be run three times (once per build configuration).

Concerns

  • Having a hybrid layout also implies an increase in the overall install layout size.
    It has been seen on Linux that just building an SDK layout for Debug is upwards of 100GiB.
    Therefore, if in the future a Linux SDK layout containing support for Android were built, it could be quite a large package to download.
    Mitigation plans should be drafted to try to reduce the binary size of build artifacts within the SDK, while still allowing for debug-ability of the SDK layout for users.

Linux Cross-Distribution Support

Linux sits in an area where there are multiple distributions with different packaging systems, tweaks to the kernel, and OS-specific APIs, which makes an SDK layout created on one distribution non-viable on distributions outside that family (Arch vs Debian vs RedHat vs Fedora, etc.).
All the 3rdParty libraries copied into the SDK layout come from the runtime dependencies of the build targets within the Engine. Using only the 3rdParty libraries that are part of the 3rdParty package system should work in most cases, but depending on how old the Linux kernel is for a distribution, some newer core libraries might not be available.

Testing will only occur on Ubuntu 20.04 LTS

Any testing of an SDK layout created on Linux will be done only on the Ubuntu 20.04 distribution.
The community will be empowered to see how well O3DE as an SDK works on other distributions.

Issues

  • Because Linux supports multiple distributions, building an SDK on one distribution might result in libraries that use versions of glibc or libstdc++ newer than the versions that are part of the "supported distribution list" of O3DE.
    Chances are that an SDK built on ArchLinux will not work on Ubuntu 20.04, due to ArchLinux shipping newer kernels and core libraries.
  • None of this work covers an SDK layout on an ARM architecture. The install layout logic doesn't take architecture into account, so running the INSTALL step to create a layout should work regardless of whether it runs on ARM Linux, ARM Mac, or ARM Windows.
    The issue, however, is that none of those platforms have 3rdParty libraries built for them, and therefore the build step would fail.

Mac ARM64/x86_64 Compatibility

With the introduction of the Mac M1, it appears that Apple is moving towards the ARM architecture for all their platforms going forward.
The Mac M1 also comes with the Rosetta translation environment, which is used to translate applications that only contain x86_64 instructions to ARM64.
This means that Mac M1 machines can run ARM64 programs natively and x86_64 programs through emulation, while older x86_64 Macs can only run x86_64 programs.
Xcode supports creating a Universal Binary for MacOS containing both architectures, so it is currently possible to build a universal binary release of O3DE on any MacOS machine.

Questions

  • Should a universal binary be made for MacOS, or should only the newer ARM64 architecture be targeted when building an SDK layout?
    • Universal binaries would be larger, increasing the overall SDK layout size.

Issues

  • All of the current 3rdParty libraries for MacOS are built for x86_64, not ARM64.
    In order to get full native support on newer Mac M1 machines, those libraries would need to be recompiled for the ARM64 architecture.

Engine Versioning

Versioning of the O3D Engine itself is an unsolved problem. This is exacerbated even more when upgrading a pre-built SDK to a newer version.
Currently the engine specifies its version in the engine.json file as "O3DEVersion", but that value is always "0.0.0.0" at the moment.
Furthermore, project.json does not specify which version of the engine the project desires to use.

Questions

  • How should the o3de python registration scripts account for versioning when registering a project?
    • Should users be able to specify a single value, or a range? Can it accept non-numeric values?
  • Should CMake prevent configuration when a project is associated with an engine outside of the version range the project specified?

Concerns

  • As game development is an iterative process that involves modification of the engine, there will be game teams that build a new Engine SDK based on those changes.
    In those cases, the range of valid versions a Game Project specifies cannot be based on the commit id, as commit ids might not be in sequential order depending on the source control system being used.
  • The versioning scheme will not take the source control system into account and therefore cannot account for ranges of commits.

Stakeholders

  • sig-build: Would want to know about changes to the CMake build system to support creating an SDK and Project Release layout.
    Would also need to be apprised of any build pipeline changes needed for building a layout on Jenkins.
  • sig-core: The O3DE as an SDK work requires changes to core systems (Engine/Project registration, Gem system, Template creation system).
    Also the group responsible for implementing the O3DE as an SDK effort.
  • sig-platform: The O3DE as an SDK effort targets multiple operating systems, which will require additional platform-specific files to be added to the layout.
    Furthermore, deploying a GameLauncher to a specific platform requires bespoke knowledge of how to run an application on specific devices such as Android and iOS.
  • sig-release (Installer and Project Manager efforts): Currently working on an installer for O3DE that lets users avoid downloading the O3DE repo and building the engine from source.
    Also maintains the ProjectManager application, which exposes project and gem management as it relates to a registered engine.
    Builds on top of the work done by the O3DE as an SDK effort.
  • sig-testing: The default workflow creates an SDK layout without the internal AutomatedTesting project.
    Being able to run the python test scripts located in the AutomatedTesting project would be useful in validating that the Editor workflow performs the same as in a source engine.

Appendix

Completed Work

The following is the work already completed on the O3DE as an SDK Effort

Milestone 1 - External Project and Gem Support, Manifest Registration system (Completed)

  • Project-centric workflow: In this workflow, the project ingests the engine through CMake, instead of the old engine-centric workflow where the reverse was true. Now CMake can be run using the Project root as the source directory.
  • External Projects: Refactor of the engine source code and build scripts to be able to specify and use a project located outside of the Engine source directory.
  • External Gems: Addition of CMake hooks (LY_EXTERNAL_SUBDIRS) and manifest logic to allow users to register a Gem external to the Engine source directory.
  • Project and Gem Templates: Creation of new Project and Gem Templates that use a new layout, different from 1.X, to organize projects and gems and to work with the CMake build system and Settings Registry.
  • Manifest Registration System: o3de python package scripts that can register gem, project, template and external subdirectory paths with either a global user manifest (~/.o3de_manifest.json), an engine manifest (engine.json) or a project manifest (project.json). The manifest allows projects to identify an engine to use in a system-independent manner, so external projects on one user's machine can use a different engine location than on another user's machine.
  • Template Creation System: A different subset of o3de python package scripts that can take a registered template and transform it into a Gem, Project or general folder with placeholders replaced.

Milestone 2 - Windows Install Layout Creation, Gem Naming Convention and Dependency Tracking (Completed)

  • CMake Install Process - Windows: This involves the marking of build targets, folders and files within CMake to determine which files should be copied over to the install layout.
  • Generation of the Install Layout CMakeLists.txt: The install layout requires CMakeLists.txt files that define the pre-built libraries and executables that can be referenced by an O3DE Project when building that project. Any target that is built from source in the source Engine needs to be available as an IMPORTED target that references the build artifact in the SDK Engine.
  • Ensure that non-build-system files needed to use the Engine as a relocatable SDK are copied to the install layout: To actually use the Engine in any reasonable manner, a multitude of additional files are needed. The <engine-root>/Assets folder needs to be copied to the install layout in order to access the Core Source Assets. Each Gem registered with the Engine may also contain an Assets folder exposing Source Assets that are available when that Gem is enabled. The <engine-root>/scripts folder is needed to copy over the O3DE python package scripts used for project management and Asset Bundling. The <engine-root>/Registry folder is needed to copy the default settings the SDK Engine uses when running; furthermore, each Gem's Registry folder needs to be copied over for Gem-specific settings. Finally, the <engine-root>/python/get_python.bat scripts are needed to make sure the SDK layout is able to download a copy of Python 3.7 that is isolated for use with the Engine and to install the site-packages used by the engine (assimp, pyside2, pybind11, o3de).
  • Locating Project Gem Modules to load from the install layout applications: The pre-built applications that come with the install layout, such as the Asset Processor and Editor, need to be able to locate the list of Gem Modules that the Project desires to load.
  • Gem Naming Convention: Having a Gem naming convention eases not only the logic to enable/disable gems in tools (o3de package enable_gem.py/disable_gem.py), but also frees users from needing to specify a specific build target when enabling/disabling a gem. The Gem name can be used for the gem configuration instead of the CMake target. This also simplifies the o3de python package script logic that parses CMake scripts to determine which Gems are enabled. Furthermore, a consistent naming convention allows Gem authors to easily segment the types of application a Gem is used in into "Variants". Variants are aliases which aggregate 0 or more CMake build targets within a Gem to expose a consistent name. The initial list of variants is "Clients", "Servers", "Tools" and "Builders". "Clients" variants are used within the Game Launcher, "Servers" variants within the Server Launcher, "Tools" variants in tooling applications such as the Editor, MaterialEditor and ShaderManagementConsole, and "Builders" variants in asset processing applications such as the AssetProcessor(Batch) and AssetBuilder.

SIG Reviewer/Maintainer Nomination: @amzn-phist

Nomination Guidelines

Reviewer Nomination Requirements

  • 6+ contributions successfully submitted to O3DE
  • 100+ lines of code changed across all contributions submitted to O3DE
  • 2+ O3DE Reviewers or Maintainers that support promotion from Contributor to Reviewer
  • Requirements to retain the Reviewer role: 4+ Pull Requests reviewed per month

Maintainer Nomination Requirements

  • Has been a Reviewer for 2+ months
  • 8+ reviewed Pull Requests in the previous 2 months
  • 200+ lines of code changed across all reviewed Pull Request
  • 2+ O3DE Maintainers that support the promotion from Reviewer to Maintainer
  • Requirements to retain the Maintainer role: 4+ Pull Requests reviewed per month

Reviewer/Maintainer Nomination

Fill out the template below including nominee GitHub user name, desired role and personal GitHub profile

I would like to nominate: @amzn-phist, to become a Maintainer on behalf of sig-core. I verify that they have fulfilled the prerequisites for this role.

Reviewers & Maintainers that support this nomination should comment in this issue.

Proposed RFC Feature New O3DE Archive System

Summary:

To add a modern and performant archive format to O3DE that can overcome the existing limitations of the current .pak file format based on CryPak.

Newer specs for storage technology such as NVMe 4.0 and NVMe 5.0 drives have led to hardware that can support throughput of 16GiB/s in the 4-lane configuration that is common for storage drives. Speeds of up to 64GiB/s would be possible in a 16-lane configuration, but that is neither common nor affordable among consumer-level storage drives.

Currently CryPak decompression is limited to zlib-based compression algorithms; alternative compression libraries such as Oodle and Snappy are not supported. Furthermore, the compression is file-based instead of block-based, which makes it hard to parallelize decompression and reads across multiple threads.
Currently, within the AZ Streamer stack, decompression of files from .pak files is the bottleneck when it comes to throughput.
Finally, the CryPak implementation that O3DE uses only supports 32-bit zip files and has a 4GiB limit on the archive file itself.

The new O3DE Archive format will address these issues by providing a Compression gem that performs block-based decompression, while also supporting 64-bit offsets to allow archive files of up to 172 TiB.
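To illustrate why block-based compression parallelizes where file-based compression does not, here is a generic sketch (not the proposed archive format; a real engine would use a job system rather than raw threads):

#include <cstddef>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

struct CompressedBlock
{
    const uint8_t* m_data = nullptr;
    size_t m_size = 0;
    size_t m_uncompressedOffset = 0; // where this block lands in the output buffer
};

// stand-in for a real block codec call (zlib, LZ4, etc.)
void DecompressBlock(const CompressedBlock& block, uint8_t* output)
{
    (void)block;
    (void)output;
}

void DecompressAllBlocks(const std::vector<CompressedBlock>& blocks, uint8_t* output)
{
    std::vector<std::thread> workers;
    workers.reserve(blocks.size());
    for (const CompressedBlock& block : blocks)
    {
        // no locking needed: every block writes to its own disjoint output range
        workers.emplace_back(DecompressBlock, std::cref(block), output + block.m_uncompressedOffset);
    }
    for (std::thread& worker : workers)
    {
        worker.join();
    }
}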

The full RFC for the new Archive TDD is linked at https://github.com/o3de/sig-core/blob/daa015f/rfcs/rfc-core-2023-04-20-new-archive-tdd.md

RFC - ROS2 Gem

Summary:

O3DE will support robotic simulation through a dedicated Gem, providing components and tools which make it easier for ROS2 developers to migrate their projects to O3DE.

What is the relevance of this feature?

ROS2 is the de facto standard in robotics, with a substantial and quickly growing user base. It supports numerous use cases in robotics, both academic and industrial, and a variety of platforms such as warehouse logistics robots, agricultural robots, airborne drones, underwater platforms and autonomous vehicles. The ROS2 Gem will encourage robotics communities to use O3DE.

Feature design description:

ROS2 Gem RFC can be seen in two perspectives:

  1. A bare minimum Gem - such as proposed in this PR.
  2. The feature set we would like the ROS2 Gem to have. Some features have already been proposed (see this query). Each of these issues could benefit from SIG comments.

We can start with (1) first.

Features for the bare minimum Gem:

  • CMake tool for ROS2 targets (projects can build with ROS2)
  • System component which includes
    • Simulation clock (source of time for ROS2 simulated systems)
    • Central ROS2 node (basic ROS2 entity)

Discussion for (2) is very important as well and I would like to put that on the radar. The ROS2 Gem will include a number of components such as:

  • Sensor components (Lidars, Cameras, IMU, GPS etc).
  • Robot control components

These will provide useful abstractions to be extendable with user types and custom implementations.
The Gem should also include a number of Assets to start with (environment and robot models).
The Gem should also include tools and extensions such as importing URDF files (and support in Asset Processor).
A robotic simulation project template would be useful, as would a demo/tutorial scene.

Technical design description:

For a bare minimum:

  1. ROS2 Gem includes a simple cmake function for project targets (which ROS2 packages does your target depend on?)
    • This allows the user to specify which ROS2 packages to include. Custom messages are a good example of a package that users would want to use with O3DE on the project level.
    • It should respect ROS2 package visibility in a typical ROS2 fashion (through environment sourcing).
    • It will detect whether ROS2 is installed and sourced while building the Gem, and will warn and return if this is not the case.
  2. ROS2 Gem includes a simulation clock, which is necessary to correctly timestamp ROS2 messages produced by the simulation. Time source should work well with non-real time modes.
    • The clock publishes on /clock topic
    • The Gem system component should expose time-stamping interface
  3. ROS2 Gem provides an O3DE system component responsible for ROS2 initialization (rclcpp::init), creating the main simulation node, and listening for inbound ROS2 communication (rclcpp::spin).
    • For most users, a single-node approach is optimal, but users are not prevented from creating their own nodes (see the sketch after this list).
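A rough sketch of how points 2 and 3 might look in code, using rclcpp directly (the class and method names here are illustrative, not the Gem's actual API; a real integration would drive OnTick from the engine's tick system):

#include <rclcpp/rclcpp.hpp>
#include <rosgraph_msgs/msg/clock.hpp>
#include <memory>

class Ros2SystemComponent
{
public:
    void Activate()
    {
        rclcpp::init(0, nullptr);                             // initialize the ROS2 client library
        m_node = std::make_shared<rclcpp::Node>("o3de_ros2"); // the central simulation node
        m_clockPublisher = m_node->create_publisher<rosgraph_msgs::msg::Clock>("/clock", 10);
    }

    void OnTick(double simTimeSeconds)
    {
        rosgraph_msgs::msg::Clock msg;
        msg.clock = rclcpp::Time(static_cast<int64_t>(simTimeSeconds * 1e9)); // simulation time, not wall time
        m_clockPublisher->publish(msg); // publish simulation time on /clock
        rclcpp::spin_some(m_node);      // service inbound ROS2 communication
    }

    void Deactivate()
    {
        m_clockPublisher.reset();
        m_node.reset();
        rclcpp::shutdown();
    }

private:
    std::shared_ptr<rclcpp::Node> m_node;
    rclcpp::Publisher<rosgraph_msgs::msg::Clock>::SharedPtr m_clockPublisher;
};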

What are the advantages of the feature?

A solution for common robotic simulation needs: users don't need to implement the entire framework repeatedly, and the ROS2 community is attracted to O3DE.

What are the disadvantages of the feature?

To respect the ROS2 way, the environment needs to be sourced before the build.

How will this be implemented or integrated into the O3DE environment?

Since this is a separate Gem, the integration is quite straightforward.

Are there any alternatives to this feature?

Explain what the impact might be of not doing this.

O3DE would not be a good choice for robotic simulation.

Note that other game engines already have solutions for that.

How will users learn this feature?

  • A design documentation for the Gem will be provided as the development progresses
  • ROS2 developers will familiarize themselves with O3DE through existing documentation
  • A tutorial oriented towards this specific group and their needs could be very useful

Are there any open questions?

  • What are the most useful sub-features for this Gem and their priorities?
  • How can we most efficiently provide robotic Assets for simulation users?

Update Best-Practice Guide to include guidance on reducing compile time cost

To help improve O3DE C++ build times, it is recommended to update the C++ Best Practice Guide with options for reducing compilation time going forward.

There are several ways to reduce O3DE compile times going forward.
One of them is reducing the number of non-template inline functions.
Other ways include:

  • Using type erasure via function pointers or AZStd::function objects that can be created in function templates, but then used in a non-templated function implemented in a single translation unit to reduce compilation time.
  • Using forward declarations where possible. For example, including the BehaviorContext, SerializeContext or EditContext headers in a .h file can almost always be avoided with a forward declaration; the Reflect function can be implemented in the cpp file.
  • Use the boilerplate macros which only declare functions for opt-in to structures such as O3DE Allocators, O3DE TypeInfo and O3DE RTTI.
  • Be careful with inner classes, as they cannot be forward declared without including the header of the outer class.
    • For example, declaring a SerializeContext VersionConverter function as follows
      static bool VersionConverter(AZ::SerializeContext& context, AZ::SerializeContext::DataElementNode& classElement);
    requires the SerializeContext.h header to be included, as the DataElementNode inner class can't be forward declared without the class definition of SerializeContext. A fuller sketch of the forward-declaration pattern follows this list.
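A minimal sketch of the pattern, assuming an illustrative MyComponent class (the UUID is a placeholder):

// MyComponent.h - only a forward declaration of AZ::ReflectContext is needed
#include <AzCore/RTTI/TypeInfo.h>

namespace AZ
{
    class ReflectContext; // forward declaration avoids including SerializeContext.h here
}

class MyComponent
{
public:
    AZ_TYPE_INFO(MyComponent, "{01234567-89AB-CDEF-0123-456789ABCDEF}"); // placeholder UUID

    static void Reflect(AZ::ReflectContext* context);
};

// MyComponent.cpp - the heavy context header is confined to one translation unit
#include <AzCore/Serialization/SerializeContext.h>

void MyComponent::Reflect(AZ::ReflectContext* context)
{
    if (auto* serializeContext = azrtti_cast<AZ::SerializeContext*>(context))
    {
        serializeContext->Class<MyComponent>()->Version(1);
    }
}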

Useful macros are listed below; a sketch of how the declaration/definition split looks follows the list.

  • AZ_TYPE_INFO_SPECIALIZE_WITH_DECL - the most recommended way to add TypeInfo support. This only adds two additional free function declarations, GetO3deTypeName and GetO3deTypeId.
  • AZ_TYPE_INFO_WITH_NAME_DECL - a less recommended way to add TypeInfo support. This adds four function declarations in total:
    two non-member friend functions, GetO3deTypeName and GetO3deTypeId, in the namespace containing the class,
    and two static member functions, TYPEINFO_Name and TYPEINFO_Uuid, that can be used to query the name and uuid associated with a class.
  • AZ_TYPE_INFO_SPECIALIZE_WITH_IMPL - the counterpart of AZ_TYPE_INFO_SPECIALIZE_WITH_DECL, which defines the two functions GetO3deTypeName and GetO3deTypeId.
  • AZ_TYPE_INFO_WITH_NAME_IMPL - the counterpart of AZ_TYPE_INFO_WITH_NAME_DECL, which defines the four functions: GetO3deTypeName and GetO3deTypeId (non-member friend functions) as well as TYPEINFO_Name and TYPEINFO_Uuid (static member functions).
  • AZ_RTTI_NO_TYPE_INFO_DECL - declares the 10 RTTI functions (6 virtual and 4 static member) that allow a class to opt in to RTTI.
  • AZ_RTTI_NO_TYPE_INFO_IMPL - adds the definitions for the 10 RTTI functions. It pairs with the AZ_RTTI_NO_TYPE_INFO_DECL macro.
  • AZ_CLASS_ALLOCATOR_DECL - declares the operator new and operator delete static members of a class to allow opt-in use of the AZ class allocators.
  • AZ_CLASS_ALLOCATOR_IMPL - adds the definitions of the operator new and operator delete static members to allow a class to use the AZ allocators for allocating its class layout when using new and delete.
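A minimal sketch of the declaration/definition split these macros enable (the class name and UUID are placeholders; exact header paths may differ):

// MyThing.h - declare TypeInfo/RTTI without defining them
#include <AzCore/RTTI/RTTIMacros.h>
#include <AzCore/RTTI/TypeInfoSimple.h>

class MyThing
{
public:
    AZ_TYPE_INFO_WITH_NAME_DECL(MyThing);
    AZ_RTTI_NO_TYPE_INFO_DECL();
};

// MyThing.cpp - define the TypeInfo/RTTI functions exactly once
#include <AzCore/RTTI/RTTI.h>

AZ_TYPE_INFO_WITH_NAME_IMPL(MyThing, "MyThing", "{01234567-89AB-CDEF-0123-456789ABCDEF}");
AZ_RTTI_NO_TYPE_INFO_IMPL(MyThing);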

Other Recommendations:
In order to iteratively improve build times, the following is recommended:

  1. Instead of using the AZ_TYPE_INFO/AZ_RTTI macros directly within the declaration of a class in a header, use the AZ_TYPE_INFO_WITH_NAME_DECL and AZ_RTTI_NO_TYPE_INFO_DECL macros to declare the TypeInfo and RTTI functions without defining them.
    In a cpp file, the AZ_TYPE_INFO_WITH_NAME_IMPL and AZ_RTTI_NO_TYPE_INFO_IMPL macros can then be used to define the TypeInfo/RTTI functions only once.
  2. Avoid adding inline definitions of functions in headers and instead aggressively move function definitions to translation units (.cpp files). Non-templated inline function bodies are compiled every time they are included, and it adds up over time.
    For example, the PoolAllocator code ranked 4th through 7th among the template functions that took the most time to instantiate, contributing about 580 seconds (9 minutes 40 seconds) of compilation time.
    That time is parallelized across the cores building at once, but even with 8 cores it adds up to over a minute.
155581 ms: AZ::Internal::PoolAllocatorHelper<AZ::PoolSchema>::RTTI_IsContainType (1117 times, avg 139 ms)
143757 ms: AZ::Internal::PoolAllocatorHelper<AZ::PoolSchema>::TYPEINFO_Uuid (1117 times, avg 128 ms)
143698 ms: AZ::Internal::PoolAllocatorHelper<AZ::PoolSchema>::RTTI_Type (1117 times, avg 128 ms)
136643 ms: AZ::Internal::AggregateTypes<AZ::PoolSchema>::Uuid<AZ::CanonicalType... (1117 times, avg 122 ms)

With the changes to move the PoolAllocator instantiations and TypeInfo functions to its cpp file, those functions no longer appear in the top 10000 templates that are instantiated.

Proposed RFC Feature Engine, Project and Gem Versions

⚠️ UPDATE 1/27/2023

Summary:

A versioning scheme is needed to determine code compatibility between engines, projects and gems. Currently, only the engine has a version scheme, tied to 6-month releases, which is not granular enough.

Version and dependency information will be added inside engine.json, project.json and gem.json and relevant tools will be updated to use this information. Version information will be incremented to indicate API or other relevant changes.

What is the relevance of this feature?

Versioning and dependency information will allow users and tools to make informed decisions regarding compatibility. Given the distributed nature of O3DE, an established version and dependency scheme is the only scalable way to allow de-centralized control over compatibility.

Feature design description:

The following changes will be made

  1. Semantic version fields in the format <major>.<minor>.<patch> (e.g. 1.21.9) will be added in engine.json, project.json and gem.json

    • <major> is for API-breaking changes
    • <minor> is for non-API-breaking changes that add new APIs or change them in a non-breaking way
    • <patch> is for all other non-API-breaking changes, usually important fixes
  2. Dependency fields containing lists of dependencies in the format <name><version specifier> (e.g. o3de >= 1.19.1) will be added in project.json and gem.json, where version specifiers are compatible with PEP 440 so we can use existing Python versioning functionality. If a version specifier is omitted for a gem dependency, the project's engine version will be used to determine the latest compatible gem version. *IMPORTANT* Version specifiers indicate "known compatibility". This means there isn't a way, with the proposed versioning system, to specify "known incompatibility". We use the version specifiers to recommend the most compatible gem, but we warn users when they attempt to use a version that is not listed as compatible.

  3. CMake will make #defines available with version information for compile-time compatibility control. e.g. EDITOR_VERSION_MAJOR, EDITOR_VERSION_MINOR, etc.

  4. The o3de.py CLI and Project Manager and CMake will take into account version and dependency specifications to display version information, determine compatibility and when a project needs to be re-compiled. The UX changes for these tools are not part of this RFC.

Workflows:

  1. During O3DE engine development in the development branch, the gem_version, engine_api_versions and engine_version will be updated as important changes are made.
    • When developers make a change to an API version in the engine or gem that ships with the engine, they will also update the engine_version. For example, if they change the minor version of a gem and zero out the patch version, they should also increase the minor version of the engine_version and zero out the patch version.
    • In the future, when a GitHub action exists to update the engine_version, developers will no longer need to manually update that field.
  2. When a stabilization branch is created in preparation for a release it should reflect version information that the main branch will have when stabilization is merged to main.
    • The engine_version minor version should be immediately incremented in development after creating the stabilization branch so there is less time that the two branches share the same engine_version value.
    • The engine_display_version in engine.json should be set to the appropriate YY.MM.XX release version prior to merging to main, preferably the last change submitted to the stabilization branch before merging to main to avoid accidentally merging this value back to development.
    • When merging from stabilization to development branches, developers should be careful not to bring the engine_display_version or engine_version from stabilization into development.
    • When merging changes from development to stabilization branches, developers should be careful to only include appropriate version changes. For example, if the major gem version was bumped in development but you're only merging over a patch fix for that gem into stabilization - do not bump the major gem version, just the patch.
  3. The major and minor engine versions in the stabilization branch should not change, because only bug fixes should be merged to that branch, which should never result in more than the patch version changing.
  4. When stabilization is merged to main it should have the correct engine_display_version.

Technical design description:

The versioning system will be backward compatible and optional. If no versioning or dependency information is available, the system will fall back to not enforcing compatibility, but will warn the user.

JSON file changes

engine.json is modified so we have version fields for release and development and a new engine_api_versions field is added so gems can depend on specific versions of APIs inside the core engine.

{
    "engine_display_version":"22.05.1",      // rename O3DEVersion field, set to 0.0.0 in development
    
    "engine_version":"1.0.1",    // use engine_version for all compatibility checks
 
    "engine_api_versions": {                 // versions of general APIs provided by the core engine (not gems)
        "editor":"3.1.4",
        "framework":"2.0.0",
        "launcher":"1.0.0",
        "tools":"4.0.0"
    },
    ...
}

project.json will now include a project and engine version and fields for engine and gem dependencies. Also, a compatible_engines field is added as a simple way for project maintainers to indicate known good versions of the engine (and gems) their project is compatible with.

{
    "project_version":"1.0.0",   // not needed for dependencies, but useful for users and added for consistency

    "engine_finder_cmake":"cmake/EngineFinder.cmake", // path to cmake script used to find the engine

    "engine":"o3de",             // engine_name this project was registered with
    "engine_version":"10.1.4",   // engine version this project was registered with
       
    "compatible_engines": [      // if empty (default) or missing, the project is assumed compatible with every engine
        "o3de>=1.0.0",           // project is compatible with any o3de engine greater than or equal to version 1.0.0
        "o3de-install==1.2.3"    // project is ALSO compatible with the o3de-install engine version 1.2.3
    ],
                               
    "engine_api_dependencies": [  // declaration of dependency on engine api versions, defaults to empty
        "framework~=2.0.0"
    ],
 
    "gem_dependencies": [        // rename "gem_names" to "gem dependencies" and support optional version specifiers
        "example~=2.3",          // project depends on example gem version 2.3.x
        "other==1.2.3",          // project ALSO depends on other gem version 1.2.3
        "lmbrcentral",           // if no version specifier, use latest version compatible with project's engine
        ...
    ],
    ...
}

⚠️ UPDATE 1/27/2023 project.json and user/project.json

  1. A new optional local-only file <project>/user/project.json can be used to override project.json settings locally. These properties can be set using o3de edit-project-properties --user. See O3DE CLI --user option changes for details.
  2. A new field engine_finder_cmake will contain the relative path to the .cmake file used to find the engine for the project. Currently this file is hardcoded to cmake/EngineFinder.cmake, but we need the flexibility to easily update and revert this logic for future engines and projects.
  3. A new field engine_path can be used to specify the path to the engine. The path may be absolute or relative in user/project.json, but may only be relative in the shared project.json. This field lets users explicitly set the path to the engine, which is especially useful if they have multiple copies of an engine with the same name and version.

gem.json will now include a version field and fields for engine and gem dependencies. It will also have a compatible_engines field as a simple way for gem maintainers to indicate known good versions of the engine their gem is compatible with.

{
    "gem_version":"0.0.0",       // default gem version is 0.0.0
 
    "compatible_engines": [      // if empty (default) or missing, the gem is assumed compatible with every engine
        "o3de>=0.0.0"  ,         // gem is compatible with any version of an engine named o3de
        "o3de-sdk>=2.0.1, <=3.1.0"   // gem is ALSO compatible with o3de-sdk engine versions from 2.0.1 up to 3.1.0
    ],

    "engine_api_dependencies": [ // optional declaration of dependency on engine api versions, default is empty
        "framework~=2.0.0"
    ],
  
    "gem_dependencies": [        // rename "dependencies" to "gem_dependencies" and add version specifiers
        "AWSCore>=1.0.0"         // NEW version specifiers added
    ],
    ...
}

CLI tool changes

The o3de.py CLI tool will have the following updates:

  1. Whenever the user attempts to make a change that violates the dependency rules, the action will fail. A --force param can be used to force the action.
  2. Edit functionality for projects and gems will be updated to allow manipulation of the new fields
  3. Gem and project registration will take into account dependencies and their versions
  4. Enable/Disable gem functionality will take into account dependencies
  5. Newly enabled gems will appear in the project.json gem_dependencies field using one of the following version specifiers (see the sketch after this list):
    1. no version specifier, if any of the following hold:
      1. the gem has no version information, or
      2. the gem ships with the engine, or
      3. the user selects the option to always use the latest gem that is compatible with their project's engine
    2. == <gem version>, if the gem has version information and the user selects the option to use a specific gem version
  6. When gems are downloaded, the appropriate version will be downloaded and put in versioned folders.
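
One possible shape for the "latest compatible" selection in item 5, sketched under the assumption that each gem version advertises per-engine specifiers; the data layout and function name are illustrative only:

from packaging.specifiers import SpecifierSet
from packaging.version import Version

def latest_compatible_gem(gem_versions, engine_name, engine_version):
    """gem_versions maps gem version -> [(engine_name, specifier), ...].

    An empty list means the gem version is assumed compatible with every engine.
    """
    engine = Version(engine_version)
    matches = []
    for version, compat in gem_versions.items():
        if not compat or any(
            name == engine_name and engine in SpecifierSet(spec)
            for name, spec in compat
        ):
            matches.append(Version(version))
    return str(max(matches)) if matches else None

versions = {
    "1.2.3": [("o3de-sdk", "==1.2.3")],
    "2.3.4": [("o3de", ">=1.0.0")],
}
print(latest_compatible_gem(versions, "o3de", "1.2.3"))  # "2.3.4"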

⚠️ UPDATE 1/27/2023 o3de.py CLI changes

  1. A --user option will be added to register, edit-project-properties and enable-gem/disable-gem to use the user/project.json and manipulate it. If --user CLI operations fail the command does not fall back to just use the project.json.
  2. An upgrade command will be added that will be used to perform project upgrades. Initially this command will compare the project's engine version and the current engine version and execute Python commands to upgrade project files, outputting the list of files changed, where backups were stored, and letting the user know that, if they're using source control, they should check these files in.

Project Manager changes

The Project Manager tool will have the following updates:

  1. Version information will be shown. For engines with display version information, the "engine_display_version" value is displayed; otherwise the "engine_version" is displayed (see the sketch after this list).
  2. Project and Gem workflows will be updated to take into account version information and surface issues to the user.
    1. users will be able to select the version of a gem they want to use with their project, but the Project Manager will attempt to determine and recommend the most compatible gem
    2. appropriate UX (likely a warning) will be displayed when a user attempts to enable a gem or compile with a gem that may be incompatible with their engine or other gems
  3. When gems are downloaded, the appropriate version will be downloaded and put in versioned folders.
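
A tiny sketch of the display rule in item 1, assuming the fields are read straight from engine.json; the helper name is hypothetical:

import json
from pathlib import Path

def display_version(engine_root):
    """Prefer engine_display_version; fall back to engine_version."""
    data = json.loads((Path(engine_root) / "engine.json").read_text())
    return data.get("engine_display_version") or data.get("engine_version", "unknown")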

CMake changes

The following changes will be made to the CMake build scripts:

  1. CMake scripts will be updated to surface the version information from each .json file as #defines that can be used by code in one of the following formats:

    ENGINE_VERSION_<MAJOR/MINOR/PATCH>
    ENGINE_<engine API>_API_VERSION_<MAJOR/MINOR/PATCH>
    <PROJECT/GEM>_<project name/gem name>_VERSION_<MAJOR/MINOR/PATCH>

    • Example 1: a gem named example with version 1.2.3 would have the following defines made available:
    GEM_EXAMPLE_VERSION_MAJOR 1
    GEM_EXAMPLE_VERSION_MINOR 2
    GEM_EXAMPLE_VERSION_PATCH 3
    • Example 2: a core engine API named framework with version 2.3.0 would have the following defines:
    ENGINE_FRAMEWORK_API_VERSION_MAJOR 2
    ENGINE_FRAMEWORK_API_VERSION_MINOR 3
    ENGINE_FRAMEWORK_API_VERSION_PATCH 0
  2. CMake scripts will be updated to take into account gem versions as well as gem names when determining correct sub-directories

⚠️ UPDATE 1/27/2023 EngineFinder CMake changes

  1. The CMakeLists.txt in projects will be updated to read project.json and then user/project.json for the "engine_finder_cmake" entry, and then include() that path (see the sketch after this list).
  2. The EngineFinder.cmake file will be updated to check project <-> engine compatibility.
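
A sketch of the lookup order described above, with user/project.json overriding project.json; Python is used here for illustration, while the real logic would live in CMake:

import json
from pathlib import Path

def get_engine_finder_cmake(project_path):
    """Read project.json, then let user/project.json override the entry."""
    value = "cmake/EngineFinder.cmake"  # current hardcoded default
    for relative in ("project.json", "user/project.json"):
        manifest = Path(project_path) / relative
        if manifest.is_file():
            value = json.loads(manifest.read_text()).get("engine_finder_cmake", value)
    return value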

Content updates

  1. Gems provided with the engine will be updated with default versions 1.0.0
  2. AutomatedTesting and other sample projects will be updated with default versions 1.0.0
  3. Gem and project templates will be updated with the new fields.
  4. Gem providers will be notified so they can update their projects and gems.
  5. Documentation will be updated to reflect the version changes.

What are the advantages of the feature?

  1. Provides more granular versioning beyond individual releases
  2. Deters (but does not prevent) users from accidentally enabling incompatible gems in their projects, or incompatible projects in their engines.
  3. Allows gem creators to specify ranges of engine versions or engine library versions that gems are compatible with
  4. Allows projects to be incrementally upgraded in development branches instead of per engine release version

What are the disadvantages of the feature?

  • Maintaining version numbers is now a developer burden. Developers must understand when to update engine and gem versions and dependency information.
  • Old engines have the O3DEVersion field in engine.json that may cause confusion while those engines are in use.

How will this be implemented or integrated into the O3DE environment?

This does not require any new libraries. It will extend both the C++ source code and the o3de Python CLI code.

Are there any alternatives to this feature?

Two other similar approaches were considered but not selected due to their reliance on git; not all customers use git for source control.

Storing the version information in .h (header) files does not satisfy the requirement of being able to determine version information when the source isn't available.

  1. GitHub Tags
    Utilize the git tag system to mark commits that increment the version.

    Pros:

    • Does not require editing a file

    Cons:

    • Requires access to the git API from scripts, or a minimal clone of the repo with the tags
    • Tags are per repo, so they must be configured when merging from a fork and will differ
    • May require some form of approval to create new tags for a version
    • This versioning only works for customers using git for source control
  2. Version Per Commit
    Each commit is considered a separate incremental version.

    Pros:

    • Automatic versioning
    • Does not require editing a file

    Cons:

    • Requires access to the git API from scripts checking this
    • Commits can be squashed or merged from other branches, changing their hashes, so they are not stable until they are in development
    • This versioning only works for customers using git for source control
    • Customers would have to use a version specifier that depends on a commit hash, which isn't numerical, hurting readability and making it hard to discover where the engine is in development.
      For example, a customer would need to depend on commit >=<sha1 hash>, but o3de>=23e38520e9f could be after commit hash o3de>=a086bcbe0

How will users learn this feature?

  1. The version of the engine along with the current version of projects as well as the supported versions for a gem will all be displayed within the Project Manager for users to see.
  2. Upgrading projects will be added as a feature that utilizes this system, automatically detecting potential upgrades and presenting them to the user.
  3. Engine, project and gem version fields will be visible in the Project Manager and in the o3de CLI.
  4. Documentation will be updated for developers to know more about how to use the versioning system.

Are there any open questions?

  1. What are the best defaults to use for project and gem versions, would something be better than 1.0.0?
  2. What are the best version specifiers to use for gem_dependencies when enabling a gem in a project?
  3. What is the best way to update the engine_version?
  4. What are the best API groupings to use for the engine_api_versions list - are there better ones than those proposed?

Examples

Project using latest version of gems for an engine

When you intend to use the latest versions of gems that are compatible with whatever engine your project uses, leave the version specifier portion of the gem_dependencies entries blank and provide an engine_name and engine_version. This will likely be common for teams that use the pre-built SDK.
The following example project will attempt to use the latest versions of the PopcornFX and KytheraAI gems that are compatible with the o3de-sdk engine version 1.2.3; because the LmbrCentral gem is provided with the engine, no version specifier is necessary.

project.json

{
    "project_name":"Example",
    "engine_name":"o3de-sdk",
    "engine_version":"1.2.3",
    "gem_dependencies": [
        "LmbrCentral",
        "PopcornFX",
        "KytheraAI",
    ],
}

Project requiring specific gem versions

Game studios with engineers dedicated to engine integrations may prefer to download all the necessary gems for a project and check them into source control and specify the exact version to use to discourage non-engineers from using other versions that may not be approved.
In this example, the project specifies a custom internal engine and version, and the exact versions of the PopcornFX and KytheraAI gems to use. The LmbrCentral gem still does not need a version specifier because it is always provided with the engine, but the team could provide one if desired for consistency.

project.json

{
    "project_name":"Example",
    "engine_name":"o3de-internal",
    "engine_version":"1.0.1",
    "gem_dependencies": [
        "LmbrCentral",
        "PopcornFX==1.2.3",
        "KytheraAI==2.3.4",
    ],
}

Gem compatible with engine release versions

When you intend to provide a gem that only needs to be compatible with released engine versions, you can specify those exact versions in the compatible_engines field. You will not be able to use a range, because a range would also include development engine versions.
The following example declares that this gem is compatible with engine versions 1.2.3 and 2.3.4.
Gems\Example\1.2.3\gem.json

{
    "gem_name":"Example",
    "compatible_engines": [
        "o3de-sdk == 1.2.3",
        "o3de-sdk == 2.3.4",
        "o3de == 1.2.3",
        "o3de == 2.3.4"
    ]
}

Gem depending on specific APIs

When you intend to provide a gem that should be compatible with any future engine so long as a core API doesn't change, you can use the engine_api_dependencies and gem_dependencies fields.
The following example shows a gem that should be compatible with all versions of the engine containing major version 2 of the framework API.
Gems\Example\1.2.3\gem.json

{
    "gem_name":"Example",
    "engine_api_dependencies": [
        "framework ~= 2.0.0"
    ]
}

RFC: Improve Developer iteration workflow

The O3D Engine suffers from slow developer iteration times when building the C++ source code base into a complete runnable target (executable or shared library).

Iteration Improvement Areas

  • Convert AzCore, AzFramework and AzToolsFramework into shared libraries (.dll, .so, .dylib) in non-monolithic targets.
    Having the core libraries built as shared libraries would help reduce the time spent linking them into applications and Gem modules.
    Furthermore, it would enforce a better public/private API split that engine developers must adhere to when exposing core library features to projects and Gems.
  • Split AzCore, AzFramework and AzToolsFramework into smaller libraries based on usable roles.
    For example, AzCore contains a Math directory where most of the math-related functions and structures are located (https://github.com/o3de/o3de/tree/development/Code/Framework/AzCore/AzCore/Math).
    A defined target such as O3deMath would give downstream targets a more fine-grained approach to selecting which libraries to depend on.
    It would also reduce the amount of code that needs to be rebuilt when these smaller libraries change.
  • Reduce the number of core gems that reside in the O3DE repo.
    Currently O3DE has over 80 Gem directories in its main repository (https://github.com/o3de/o3de/tree/development/Gems), all of which build whenever the 'all' target is built.
    Many developers don't build specific targets, so they tend to overbuild content within the engine.
    Reducing the number of core Gems within the main repo and the gems built in Continuous Integration (CI) would help with build times.
    The bottom of the Public API/Private API split RFC details a set of potential Gems that could be moved out of the O3DE core repo: #37 (comment)
  • Form solutions to reduce build times in high-impact areas with many complex compile-time constructs such as templates.
    The primary areas of the O3DE codebase that need a better build-time solution are the SerializeContext, BehaviorContext, EditContext, EBus, Component, and AzTypeInfo RTTI systems.
  • Tackle the ScriptCanvas files with high compile times, where templated code is used to reflect function pointers to ScriptCanvas Nodes.
    The amount of time spent compiling ScriptCanvas nodes is massive and has a huge impact on binary sizes as well.
    The ScriptCanvas and ScriptCanvas Editor static libraries are the two largest libraries in O3DE by binary size.
  • Reach out to @o3de/sig-content to determine how to remove the majority of the slice code. A bit of slice code is still needed for UI Canvas files, but some of that code was duplicated and should therefore be simpler to remove.
    Shared slice code that is used by UICanvas might need to be moved to the LyShine Gem.

Proposed RFC Feature: No-Code Projects

Summary:

The ability of O3DE to have a "No-Code" project

What is the relevance of this feature?

Creating a No-Code project would allow users to immediately enter the Editor for their project, work with Script Canvas and Lua components, and evaluate the engine without having to download gigabytes of 3rd party libraries or mess with compilers. It would remove a significant barrier to getting started with O3DE.

Feature design description:

This RFC is created to spark interest and help find alternatives to various pieces of the implementation. Prototypes for some aspects are already completed to prove that this works. This project can be delivered in steps, each of which provide more functionality.

From a user-journey point of view (UX TBD), an example of the FULL implementation would be something like

  1. User installs O3DE from an installer
  2. User runs the O3DE Project Manager
  3. User clicks "CREATE NEW PROJECT". A "Create project" screen appears (currently shows templates). One of the templates they can choose from is the no-code project, which has a description explaining that you don't need a compiler or anything else to use it, and that it is a quick way to get into the engine and try it out for yourself.
  4. User selects No-Code Project template, and chooses a name and location to save it.
  5. The Project Manager instantiates the template, and the new project appears in the list of projects. The user can now open the project and immediately go into the editor. It doesn't require building (or, if it does, does so automatically and without needing a compiler).
  6. Inside the editor, the "create shippable game" option uses a generic game launcher (in non-monolithic mode) to deploy a copy of their game.
  7. Users can use the Project Manager to add code to their project at a later time, or migrate the assets into a code project if they want to take that step.

Technical design description:

O3DE already allows projects to launch without any compile or build step. When the Editor or game runtime starts up, it reads a JSON file which contains the list of shared libraries (dlls) to load, loads those libraries, and starts up. The Editor, runtime, and tools like Asset Processor do not require CMake, only the files which contain the list of module plugins to load, and it is not a requirement that the game project being launched actually has one of its own.

Unfortunately, right now, that "list of dlls to load" is generated by CMake when you 'build' the project. This is why a project build is currently required, even when using the pre-built installer version of O3DE and even if your project has no dlls of its own: to generate the file that tells the editor what shared libraries to load. If that file can be generated by some other means or shipped, the entire build step is unnecessary.

You can currently make a prototype no-code project yourself by creating a project and then eliminating the project's own code targets and dlls from that JSON file.

The proposed feature implementation is this (See alternatives below):

  1. Update the official format of project.json to be able to specify the attribute "no-code": true. This will allow Python, CMake, and other tools to know that the project is a no-code project without having to run CMake to find that out (see the sketch after this list).
  2. Modify the engine-finder template to look for this tag and set a global O3DE_NOCODE : True.
  3. Modify the default project template to specify project( projectname NONE) if O3DE_NOCODE is True.
  4. Modify the o3de cmake scripts to skip parts related to compiling or fill in defaults if O3DE_NOCODE is True. For example, ly_add_target can create a custom target that does nothing. This would also allow gem authors to take special steps if desired in nocode situations.
  5. Create a No-Code project template that sets that attribute to true and does not include any code targets from the project itself and has a copy of the registry settings files that tell it what to load on startup.
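
A minimal sketch of the check from step 1, assuming the proposed "no-code" attribute lands in project.json as described; the helper name is illustrative:

import json
from pathlib import Path

def is_no_code_project(project_path):
    """True when project.json opts the project out of having code targets."""
    manifest = json.loads((Path(project_path) / "project.json").read_text())
    return bool(manifest.get("no-code", False))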

At this point, we have a basic no-code project that can instantly be templated and can start the editor with no complaint from Project Manager. However, there are several drawbacks and disadvantages to stopping at this part instead of going further.

Firstly, the no-code project template would have a 'frozen' set of gems activated. You cannot modify this list, as the information here is captured from the list of shared libraries to load, which was copied instead of generated.
Secondly, you cannot create final actual game deliverables using this template, as there is no game executable to use.
However, there is a benefit to doing the 5 steps above: because this approach still hooks into the existing engine cmake, the engine can be modified in subsequent versions to provide more and more of the functionality described here. These 5 steps could be completed quickly in order to ship a "demo project" mode that lets you quickly open the editor and play around, even if you can't build a final game with it or change active gems.

To overcome the "no gems can be changed" problem, further changes are required:

  1. Modify the logic in the engine build code to generate a map of (gem name --> dlls to load) as a separate file that can be shipped in the installer
  2. Modify the No-code logic in the engine build code to use this map in no-code mode to generate the list of actual dlls to load.
    (alternatively, modify the actual code in the bootstrap of the engine code that looks at these mapping files instead)

This would allow gems to be modified even in no-code projects. You would have to select from already pre-built gems though, like the ones that come with the installer, or any 3rd party gems that come with pre-built modules. You would not be able to use gems that only ship as code.

To overcome the "you cannot build an actual game runtime" problem, further changes are required:

  1. Create a default game launcher project that is part of the build when creating an install. This default game launcher executable will just launch the game and execute autoexecs and use data (json or xml) to control what splash screens look like, initial stage to load, that sort of thing.
  2. Modify the "build and deploy" scripts to check for no code project, and if so, use this game launcher.

Are there any alternatives to this feature?

  • I prototyped some more quick-and-dirty hacks for this feature, such as eliminating all cmake files entirely instead of using no-code cmake files, but they caused problems with extensibility and led to a dead end where you can NEVER really allow gems to be switched or use 3rd-party gems with prebuilts.

How will users learn this feature?

  • I recommend featuring it prominently (potentially even as the default) when you click CREATE PROJECT in the project manager.

Are there any open questions?

  • Thoughts about this approach, how scope can be reduced, how this can be shipped in pieces, and who's doing some of this (Nick L is currently volunteering to do some of it, but it's a big task to do the whole 9 yards...)
  • How it would work with the "build a game" macro system that makes your shipped game
  • Whether there's a way we could make this modular and still work with 3rd-party prebuilt gems.

One suggestion for making it work with 3rd-party prebuilt gems is to separate the map of "gem name to dlls it needs to load" into a different file (or a different section in a json regset file) for each gem. This would allow the regset system to handle merging automatically, because it merges the registries of active gems. But it also merges the registries of the entire engine. So it may be better to separate these into specific regset files keyed off the gem names that are active, and make the boot loader of the engine set the tags for each gem active in the project before it tries to merge registries. This also means that Gem ZZZZ could include a registry folder in its prebuilt version that contains modulemap.gem_active_zzzz.regset and have it automatically be applied to anything that specifies gem_active_zzzz in its list of active tags for registry combining.

Proposed RFC Feature per-engine recently used project path

Summary:

When launched without a project path parameter, O3DE tools (Editor, Asset Processor etc.) will attempt to use the most recently used project for that engine. These paths will be stored in a .setreg file in .o3de/Registry or inside the engine in user/Registry.

What is the relevance of this feature?

Currently, when a user directly runs tools like Editor.exe or AssetProcessor.exe without specifying a project path, and these tools are not inside a project folder, they will launch the Project Manager (o3de.exe), from which the user can select the project they want to use. There are a few misses here:

  1. Needlessly asking the user what project they want to use for a specific tool introduces unnecessary friction and time lost.
  2. Some tools cannot be launched from the Project Manager (yet), for example AssetProcessor.exe
  3. Users that ran the installer and only have a single project on their machine are confused when running the Editor desktop shortcut, or Editor.exe, does not launch the only project they have and instead keeps opening the Project Manager.
  4. Most past and present users that work on a team get their engine and single project from source control in a format where the engine and project are in separate folders, typically including pre-built binaries outside the project folder, so their use case is similar to that of users who run an installer.

Feature design description:

From a user perspective there will be no UI change.

  1. When users launch the Editor from the Project Manager, it writes that project path for that engine to a .setreg file. Each engine will have its own list of recently used project paths.
  2. When the user attempts to run an O3DE tool (Editor.exe, AssetProcessor.exe etc.) without specifying a project path, the tool will, in addition to the existing project path detection logic, look for this recently used project path.

NOTE: Users that distribute a custom engine and project via source control can already provide a default relative project path using a .setreg file in their engine's Registry folder, which is great for advanced users, but this recently used project feature will provide similar benefits to non-advanced users.

! 3/4/2022 NOTE: The technical design description will be updated in the comments section.  
! This original description will remain here for now to preserve the original proposal.

Technical design description:

The most recently used project path will be written by the Project Manager (o3de.exe) to a .setreg file inside the .o3de folder; a hypothetical example of this file follows the two options below.

Option 1: .o3de/Registry/recent_projects.setreg

PRO:
  • guaranteed writeable location
  • lives alongside other user config data
  • specific to user

CON:
  • may break if engine name changes

Option 2: user/Registry/recent_projects.setreg

PRO:
  • survives engine name changes

CON:
  • not guaranteed writeable
  • shared by every user on machine
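
For illustration only, a recent_projects.setreg under either option might look like the following; the key names are hypothetical, not a finalized schema:

{
    "O3DE": {
        "RecentProjects": {              // hypothetical key names, not a finalized schema
            "o3de": [                    // per-engine list of recently used projects, most recent first
                "C:/projects/MyGame",
                "C:/projects/OtherGame"
            ]
        }
    }
}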

What are the advantages of the feature?

  • Less friction using O3DE tools for most users
  • Users get into the Editor faster, with fewer clicks
  • Fewer repetitive tasks
  • Tools that cannot be launched from the Project Manager, can be launched without specifying a project path on the command line

What are the disadvantages of the feature?

  • in rare cases where the user does not want to use a recent project, they will need to manually launch the Project Manager (o3de.exe)
  • if there is a problem opening the tool because of some misconfiguration or runtime error, the user will need to manually launch the Project Manager

How will this be implemented or integrated into the O3DE environment?

  • The Project Manager will be updated to write out the recently used project path when the Editor is launched
  • AzFramework::ProjectManager::CheckProjectPathProvided, which is used by the Editor, is updated to load this .setreg file and key
  • AzCore::Util::GetProjectPath, which is used by the AssetProcessor and other tools, is updated to load this .setreg file and key

Are there any alternatives to this feature?

  • The o3de CLI supports a global default project, but it must be manually set; it is not engine-specific, and there is no GUI to set it.

How will users learn this feature?

  • Users should not need to learn about this feature; it should just work.

Are there any open questions?

  • How do we best handle the situation where a tool cannot be launched because the most recently used project is misconfigured?

Proposed SIG-Core meeting agenda Date/Time: 2/DEC/2021 0900 PST/1700 GMT

Meeting Details

The SIG-Core Meetings repo contains the history of past calls, including links to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

Meeting Agenda

  • IMGUI currently doesn't have an owner. While @o3de/sig-graphics-audio own the rendering side, they don't intend to own the integration or updates. Can @o3de/sig-core take ownership of it?
  • Localisation: @sptramer would like to discuss the underlying issue behind o3de/o3de#5983
  • Nomination of user @nemerle to become a reviewer.

Please comment on additional agenda items in the comments

SIG-Core Chair/Co-Chair Nominations 12/1 - 12/8 -- Elections 12/8 - 12/15

SIG chair / co-chair elections for 2022

Since the inception of O3DE, each SIG chair has been staffed as an interim position. It's time to hold official elections, following some of the proposed guidance but with our own process, due to the holiday season and in order to expedite the elections into next year.

The chair / co-chair roles

The chair and co-chair serve equivalent roles in the governance of the SIG and are only differentiated by title in that the highest vote-getter is the chair and the second-highest is the co-chair. The chair and co-chair are expected to govern together in an effective way and split their responsibilities to make sure that the SIG operates smoothly and has the availability of a chairperson at any time.

Unless distinctly required, the term "chairperson" refers to either/both of the chair and co-chair. If a chair or co-chair is required to perform a specific responsibility for the SIG they will always be addressed by their official role title.

In particular, if both chairpersons would be unavailable during a period of time, the chair is considered to be an on-call position during this period. As the higher vote-getter they theoretically represent more of the community and should perform in that capacity under extenuating circumstances. This means that if there is an emergency requiring immediate action from the SIG, the chair will be called to perform a responsibility.

Responsibilities

  • Schedule and proctor regular SIG meetings on a cadence to be determined by the SIG.
  • Serve as a source of authority (and ideally wisdom) with regards to O3DE SIG area of discipline. Chairpersons are the ultimate arbiters of many standards, processes, and practices.
  • Participate in the SIG Discord channel and on the GitHub Discussion forums.
  • Serve as a representative of the broader O3DE community to all other SIGs, partners, the governing board, and the Linux Foundation.
  • Represent the SIG to O3DE partners, the governing board, and the Linux Foundation.
  • Coordinate with partners and the Linux Foundation regarding official community events.
  • Represent (or select/elect representatives) to maintain relationships with all other SIGs as well as the marketing committee.
  • Serve as an arbiter in SIG-related disputes.
  • Coordinate releases with SIG Release.
  • Assist contributors in finding resources and setting up official project or task infrastructure monitored/conducted by the SIG.
  • Long-term planning and strategy for the course of the SIG area of discipline for O3DE.
  • Maintain a release roadmap for the O3DE SIG area of discipline.

Additionally, at this stage of the project, the SIG chairpersons are expected to act in the Maintainer role for review and merge purposes only, due to the lack of infrastructure and available reviewer/maintainer pool.

... And potentially more. Again, this is an early stage of the project and chair responsibilities have been determined more or less ad-hoc as new requirements and situations arise. In particular the community half of this SIG has been very lacking due to no infrastructural support, and a chairperson will ideally bring some of these skills.

Nomination

Nomination may either be by a community member or self-nomination. A nominee may withdraw from the election at any time for any reason until the election starts on 12/8.

Nomination requirements

For this election, nominees are required to have at minimum two merged submissions to http://github.com/o3de/o3de (must be accepted by 2022-01-31). This is to justify any temporary promotion to Maintainer as required by this term as chairperson. Submissions may be in-flight as of the nomination deadline (2021-12-08 12PM PT), but the nominee must meet the 2-merge requirement by the end of the election or they will be removed from the results.

Any elected chairperson who does not currently meet the Maintainer status will be required to work with contributors from the SIG to produce an appropriate number of accepted submissions by January 31, 2022 or they will be removed and another election will be held.

The only other nomination requirement is that the nominee agrees to be able to perform their required duties and has the availability to do so, taking into account the fact that another chairperson will always be available as a point of contact.

How to nominate

Nominations will be accepted for 1 week from 2021-12-01 12:00PM PT to 2021-12-08 12:00PM PT.
Nominate somebody (including yourself) by responding to this issue with:

  • A statement that the nominee should be nominated for a chair position in the specific SIG holding its election. Nominees are required to provide a statement that they understand the responsibilities and requirements of the role, and promise to faithfully fulfill them and follow all contributor requirements for O3DE.
  • The name under which the nominee should be addressed. Nominees are allowed to contact the election proctor to have this name changed.
  • The GitHub username of the nominee (self-nominations need not include this; it's on your post.)
  • Nominee's Discord username (sorry, but you must be an active Discord user if you are a chairperson.)

Election process

The election will be conducted for one week, from 2021-12-08 12:00PM PT to 2021-12-15 12:00PM PT, and held through an online poll. Votes will be anonymous and anyone invested in the direction of O3DE and the SIG holding the election may vote. If you choose to vote, we ask that you be familiar with the nominees.

If there is a current interim chair, they will announce the results in the Discord sig channel as well as the SIG O3DE mailing list no later than 2021-12-17 1:00PM PT. If there is no interim chair, the executive director will announce the results utilizing the same communication channels. At that time if there is a dispute over the result or concern over vote tampering, voting information will be made public to the extent that it can be exported from the polling system and the SIG will conduct an independent audit under the guidance of a higher governing body in the foundation.

The elected chairpersons will begin serving their term on 2022-01-01 at 12AM PT. Tentatively SIG chairs will be elected on a yearly basis. If you have concerns about wanting to replace chairs earlier, please discuss in the request for feedback on Governance.

Add a C++ Best Practice page to governance

@santorac has reached out looking to add the C++ Best Practices page to the public github o3de/sig-core repo for governance.
The C++ Best Practices page is a set of guidelines that developers can use (but are not required to use) to look up previous solutions to problems found when using O3DE. The guidance is meant to help developers avoid pitfalls with some O3DE-specific constructs.

There is a Lumberyard Best Practices page that can perhaps be used as a base for creating a full Best Practices page.

A sanitized version of that page has been attached to this issue.
C++-Best-Practices-Guide.md

RFC for Core Charter

Please read and provide feedback on our charter in this thread:

SIG Core Charter

This charter adheres to the Roles and Organization Management specified in "<sig-governance.md>".

Team information may be found in the "<readme.md>"

Overview of SIG

Two concise lines explaining what this SIG does with bullet points of the major responsibilities

  • Responsibility 1

Goals

  • Major goals that SIG seeks to generally achieve

Scope

  • Responsible for the keybind and controller framework system

  • Design and implement localization framework for editor and project runtime.

  • Publish and maintain localization data format structure

  • Maintain behavior context, edit context, serialization contexts, and code reflection frameworks and systems.

  • Create framework for exposing profiling and metrics data that can be collected

  • Maintain generic node based scripting framework and data representation model

  • Maintain asset catalog, asset processor, and builder systems and framework

  • Maintain Packaging artifact and catalog system

  • Design and Maintain asynchronous loading stream system

  • Maintain Prefab system

  • Maintain core system libraries AZCore and AZFramework

  • Maintain Editor python bindings framework

  • Maintain Physics API, and integration of physics related gems

  • Maintain EmotionFX and Animation systems

  • Maintain Logging and Trace systems and frameworks

  • Publish and maintain a list of use case examples for each subsystem of AZCore.

Generalized overall scope of work

In scope

Cross-cutting Processes

  • Support and collaborate with all SIGs in relation to changes and updates to underlying frameworks
  • Publish procedure for the intake of requests from SIGs in relation to changes and needs to core systems
  • Provide consultation, discovery, and guidance for new feature support brought forth by other SIGs
  • Publish and maintain best practices, usability examples, and feature documentation.

Out of Scope

  • Not responsible for building bespoke solutions to meet individual needs, but may be consulted with from time to time without obligation.

SIG Links and lists:

  • Joining this SIG
  • Slack/Discord
  • Mailing list
  • Issues/PRs
  • Meeting agenda & Notes

Roles and Organization Management

SIG Core adheres to the standards for roles and organization management as specified by <sig-governance.md>. This SIG opts in to updates and modifications to <sig-governance.md>.

Individual Contributors

Must provide a report of performance and blast radius impact of direct and indirectly affected systems

Additional information not found in the sig-governance related to contributors.

Maintainers

Additional information not found in the sig-governance related to contributors

Additional responsibilities of Chairs

Additional information not found in the sig-governance related to SIG Chairs

Subproject Creation

Additional information not found in the sig-governance related to subproject creation

Deviations from sig-governance

SIG will elect 4 chair members due to vast interaction across all SIGs

Explicit Deviations from the sig-governance

Create Roadmap View

Hi! SIG release is working to help the O3DE community to get visibility of the O3DE roadmap. Full context can be read here: o3de/sig-release#79.

In order to achieve that goal, we need your help to create the roadmap for your SIG by February 6th, 2023, and to start giving brief updates about the roadmap items at the Joint TSC meeting on February 28th, 2023. Instructions for creating the roadmap view can be found in the RFC section "Roadmap Review in TSC monthly meeting".

Let me know if you have any concerns with the dates or any questions about the ask!

SIG-Core meeting agenda for 14-SEPT-2022

Meeting Details

The SIG-Core Meetings repo contains the history of past calls, including links to the agenda, recording, notes, and resources.

SIG Updates

What happened since the last meeting?

Meeting Agenda

  • Elections for SIG chair and co-chair: Decide on a timeline and process to nominate and vote on the next two people to run the SIG for 12 months

Please comment below on any additional items you'd like to add to the agenda.

Discuss @AMZN-alexpete RFC for engine versioning #44

Discuss SIG-Core stance on Console Support and how it affects the C++ standard that O3DE uses

Discuss agenda from proposed topics

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

List any additional items below!

Proposed SIG-Core meeting agenda for 2022-03-04

Meeting Details

  • Date/Time: March 4, 2022 @ 5:00pm UTC / 12:00pm ET
  • Location: Discord SIG-Core Voice Room
  • Moderator: @lumberyard-employee-dm
  • Note Taker Volunteers needed

The SIG-Core Meetings repo contains the history of past calls, including links to the agenda, recording, notes, and resources.

Meeting day and time vote: https://doodle.com/poll/fkwaviicdqch3equ?utm_source=poll&utm_medium=link

SIG Updates

New SIG-Core chair: @amzn-pratikpa
New SIG-Core co-chair: @lumberyard-employee-dm

Meeting Agenda

Outcomes from Discussion topics

Discuss outcomes from agenda

Action Items

Create actionable items from proposed topics

Open Discussion Items

List any additional items below!

Proposed RFC Feature : Python Relocation and Virtual Environments

Summary:

The download and installation of Python should be moved out of the engine and into a package outside of the engine root, and subsequent uses of Python should go through a virtual environment instead of directly through the downloaded package.

What is the relevance of this feature?

Moving Python out of the engine path and into the LY_3RDPARTY_PATH with the other 3rd Party packages will provide the following benefits:

  • Makes the location of the Python package consistent with the rest of the downloaded 3rd Party packages.
  • Moving it out of the engine root keeps the engine folder consistent with the github source; Python no longer needs to be explicitly added to the .gitignore file.
  • Installers that rely on the engine being immutable (e.g. SNAP) no longer need to download and package Python along with the rest of the engine binaries into the installer, reducing the size of the installer.

Python virtual environments provide a mechanism to isolate the packages and libraries of a specific application from other applications. Since a virtual environment creates a small layer on top of the main Python package, the overhead is minimal. Switching O3DE to use Python virtual environments will add the following benefits:

  • The Python library can be shared across multiple copies of O3DE installed locally. The same Python version and revision will only be downloaded and unpacked once.
  • The validation hash for the package will remain valid across the lifetime of the package. All additional modules installed via pip will go into the virtual environment; the Python package itself will not be altered.

Feature design description:

This feature is a change to how O3DE uses Python, and the updates will have minimal impact on development workflows that do not involve Python directly.

  • The current process to bootstrap Python will be updated to download and unpack Python in the common O3DE 3rd Party folder.
  • For each unique engine on the local system, a Python virtual environment will automatically be created for it.
  • The o3de-specific Python script (Python.cmd/Python.sh) will be updated to refer to Python in the virtual environment.
  • All targets that use the Python bindings 3rd Party library (ProjectManager, EditorPythonBindings) will be updated to use the virtual environment.

Technical design description:

Python 3rd Party Package
A new revision of the Python 3rd Party package is not necessary. The Python virtual environment module is part of the standard Python library. The pip modules are already part of the package as well. The handling of the 3rd Party Package will change in the following ways:

  • Currently the Python 3rd Party package is downloaded and extracted into <Engine Root>/Python/runtime/$PYTHON_PACKAGE_NAME, where <Engine Root> is the location of the engine (either from github or an installer package). The location will change to $LY_3RDPARTY_PATH (inside the $HOME/.o3de / %USERPROFILE%\.o3de folder)
  • The package currently discards the original downloaded compressed package after it has been expanded into its final location. This will change so that this package is treated the same as all other 3rd Party Packages: the downloaded compressed package will be retained in the downloaded_packages folder, and the package will be validated against its hash during every cmake generation run.

Bootstrap Process

The initial bootstrap process for Python will be updated to include the creation of the Python virtual environment, and the subsequent O3DE-specific installation of modules will use the venv instead of direct Python calls into the 3rd Party Package. The target location of the virtual environment will be unique to the engine path from which the bootstrap process is occurring. O3DE will use the full absolute path of the current engine to generate a reasonably unique identifier. This path will be deterministic based on the engine path.

Below is the current bootstrap flow for O3DE:

  1. Download and unpack the Python 3rd Party Package into $O3DE/Python/runtime if needed.
  2. Only perform the package validation on the first download, not on subsequent cmake project generation calls, and prevent the normal 3rd party package hash validation. This is because subsequent pip calls into the 3rd Party package alter the package contents, so the package hash will differ.
  3. Perform pip install using the O3DE Python script under $O3DE/Python to perform the installation of the following:
    • General packages defined in $O3DE/Python/requirements.txt
    • Packages from $O3DE/Tools/LyTestTools
    • Packages from $O3DE/Tools/RemoteConsole/ly_remote_console
    • Packages from $O3DE/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools

The updated bootstrap flow for O3DE will be:

  1. Download and unpack the Python 3rd Party Package into $LY_3RDPARTY_PATH if needed.

  2. Perform the standard package validation against the package hash.

  3. Calculate the full path ($PYTHON_VENV_PATH) to the Python virtual environment based on the absolute path of the engine. The full path will be:
    $HOME/.o3de/Python/$ENGINE_ID/ where $ENGINE_ID will be the first 8 hexadecimal digits of the SHA-1 hash of the absolute path of the current engine (see the sketch after this list).

  4. Check if the $PYTHON_VENV_PATH path exists. If it exists, check the following scenarios:

    • Check if the expected Python binaries (by platform) exists.
    • Check if the expected pyvenv.cfg exists.
    • Check if a 3rd Party hash marker file .hash exists.
    • If the .hash exists, check if it matches the package hash of the 3rd Party Python package the virtual environment is meant for.
      If one or more of the above checks fail, then check a cmake variable O3DE_ERROR_ON_PYTHON_HASH_MISMATCH to determine whether to clear out the venv and regenerate it, or to report a fatal error with instructions on how to regenerate the venv.
  5. If $PYTHON_VENV_PATH does not exist, or it was cleared out by the above validation checks, then create the virtual environment with the following command ($PYTHON_EXECUTABLE is the Python executable inside the Python 3rd Party package; its subfolder is platform specific):
    $PYTHON_EXECUTABLE -m venv $PYTHON_VENV_PATH
    Once the environment is initialized to $PYTHON_VENV_PATH, write the package hash for the Python 3rd Party package to the folder to .hash.

  6. Perform pip install using the O3DE Python script under $O3DE/Python to perform the installation of the following:

    • General packages defined in $O3DE/Python/requirements.txt
    • Packages from $O3DE/Tools/LyTestTools
    • Packages from $O3DE/Tools/RemoteConsole/ly_remote_console
    • Packages from $O3DE/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools
      The O3DE Python script will be updated to use the Python virtual environment for the engine instead (see 'Python Script Updates')
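
A sketch of the $ENGINE_ID derivation and venv creation from steps 3-5, assuming the hashing scheme described above; the helper names are illustrative:

import hashlib
import subprocess
from pathlib import Path

def python_venv_path(engine_path):
    """$PYTHON_VENV_PATH: first 8 hex digits of the SHA-1 of the absolute engine path."""
    engine_id = hashlib.sha1(str(Path(engine_path).resolve()).encode("utf-8")).hexdigest()[:8]
    return Path.home() / ".o3de" / "Python" / engine_id

def create_venv(python_executable, venv_path, package_hash):
    """Create the venv, then record the source package hash beside it (.hash)."""
    subprocess.check_call([str(python_executable), "-m", "venv", str(venv_path)])
    (venv_path / ".hash").write_text(package_hash)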

Python PAL-ification

The current Python bootstrap script does not employ the Platform Abstraction Layer (PAL) pattern for the Python packages since it is used in both generation and script modes. In script mode, the main LYPython.cmake does not have access to many of the PAL variables that are available during generation mode, so it currently does a manual check against the current platform (and architecture) to determine the package name, hash, etc.

In order to refactor the script to follow the PAL pattern, the PAL variables that are needed will instead be initialized by the current get_python.* scripts. get_Python.bat is guaranteed to run on Windows only, so the Windows-specific PAL variables can be hardcoded in that script. get_Python.sh is valid for both Linux and Mac, so platform detection (as well as architecture detection for Linux) will be handled there. Since this is a BASH script, it can detect the platform by using the $OSTYPE environment variable.

These get_Python.* scripts subsequently call get_python.cmake, which provides the necessary PAL-related variables and passes them to LYPython.cmake to run through the same bootstrap process as a cmake project generation workflow.

The platform specific information for the Python package will be moved to PAL'ified files cmake/3rdParty/Platform/{$PLATFORM_NAME}/Python_{$PLATFORM_LOWER}.cmake.

Python Script Updates

Since PAL-ification is handled either in the get_Python.* scripts or as part of the cmake generation workflow, the location of the Python virtual environment for the engine will be set to a known location based on the current engine rather than trying to detect the platform and the location of the actual Python 3rd Party package. The current Python.cmd/Python.sh will be updated to generate the deterministic path to the Python virtual environment $PYTHON_VENV_PATH by performing the same logic employed by the bootstrap workflow.

Instead of running Python from the 3rd Party package, it will instead run through the execution flow for virtual environments:

  1. Run the activate script within the venv to setup the proper environment
  2. Run the Python executable within the venv
  3. Run the deactivate script (Windows) to restore the environment

The activate/deactivate is necessary to set up and tear down the virtual environment. (Only activate is needed on BASH since it is running in its own shell)
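
As a rough illustration of that flow, a wrapper could simply invoke the interpreter inside the venv (launching the venv's own python binary picks up the environment via pyvenv.cfg, so explicit activate/deactivate matters mainly when reusing the caller's shell); the helper name is illustrative:

import os
import subprocess
from pathlib import Path

def run_venv_python(venv_path, args):
    """Invoke the interpreter that lives inside the virtual environment."""
    subdir = "Scripts/python.exe" if os.name == "nt" else "bin/python"
    return subprocess.call([str(Path(venv_path) / subdir), *args])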

Embedded Python Updates

The targets that depend on the pybind 3rd Party Package (Project Manager and the Editor Python Bindings Gem) will also need to update their environment to use the virtual environment. The pyvenv.cfg file is only used when running the Python interpreter that is inside the virtual environment. With embedded Python, however, we will need to read in this file and set PYTHON_HOME to the 3rd Party Python library. In addition to initializing the Python interpreter from pybind, we will need to scan the virtual environment's site-packages for *.egg-link files. These files tell the Python interpreter where to look for additional modules in other folders. Pybind11 has trouble interpreting these files, so we will need to work around this issue by scanning and manually adding the paths contained in these egg-link files to the PYTHONPATH environment variable (see the sketch below).
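
A sketch of the egg-link workaround, assuming the standard setuptools egg-link layout where the first line of the file is the directory containing the package; the helper name is illustrative:

import os
from pathlib import Path

def add_egg_link_paths(site_packages):
    """Append the target paths of *.egg-link files to PYTHONPATH."""
    extra = []
    for egg_link in Path(site_packages).glob("*.egg-link"):
        # The first line of an egg-link file is the directory that holds the package.
        lines = egg_link.read_text().splitlines()
        if lines and lines[0].strip():
            extra.append(lines[0].strip())
    if extra:
        current = os.environ.get("PYTHONPATH", "")
        os.environ["PYTHONPATH"] = os.pathsep.join([p for p in [current, *extra] if p])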

What are the advantages of the feature?

  • Removes the one-off location of the 3rd Party packaging for Python. Python is now treated the same as the other 3rd Party Packages.
  • Prevents changes / updates to the extracted Python 3rd Party package. This allows for package integrity hash checks beyond just the initial download.
  • Moving the package outside of the engine root helps the engine folder maintain consistency. The Python runtime can be removed from the source .gitignore file.
  • Linux SNAP packages rely on the installed engine to run from an immutable path/system. This made it impossible to have a SNAP installed version of O3DE download Python as needed. To get around this issue, the SNAP packaging process downloads and installs Python and all of its pip modules into the local Python/runtime folder, and then packages it in the SNAP container. The disadvantages of this are:
    • Larger snap packages to download.
    • Python resides in immutable storage. It is not possible to pip install new modules.
  • Cleaning the Python environment can be done by simply wiping out the generated virtual environment instead of removing the engine local runtime and re-downloading the entire package.
  • Multiple instances of the engine on the same machine can share the same Python 3rd Party package (provided they are on the same version and revision). Their engine-specific packages will instead be stored in their own virtual environments.
  • Protects against any system PYTHONPATH injection when running Python. For instance, the ROS ecosystem installs its own Python and injects ROS-specific Python packages into PYTHONPATH.

What are the disadvantages of the feature?

  • The setup and teardown of the virtual environment on every Python call may incur a minor performance hit (as opposed to running Python directly).
  • There is no mechanism to clean up generated virtual environments other than manually deleting them.
  • Introduces risks of dangling virtual environments if the Python 3rd Party package is removed but the virtual environment is not.

How will this be implemented or integrated into the O3DE environment?

The implementation is described in the technical design description above. For source versions of the engine, the updated scripts will perform the bootstrap process as needed to set up the Python environment properly. The legacy Python runtime will still exist in the engine path and may be removed manually.

Are there any alternatives to this feature?

  • Python 3rd Party can be moved externally without using virtual environments
    We can skip using virtual environments and directly follow the same scenario where the package content is updated in place. This re-introduces the workaround where package validation is only done once, when the package is initially downloaded. It also means that the location of the 3rd Party binary will no longer be deterministic for the engine, so the path would need to be hardcoded or generated somewhere in the engine in order to access it.

  • Use alternatives to Python Virtual Environments:

    • Conda provides package and environment management for any language.
    • Pipenv builds on pip and venv to provide Python environment management beyond the default venv support.

    Python virtual environments were chosen because venv ships with Python by default and does not require additional packages/dependencies.

How will users learn this feature?

The bootstrap process will occur automatically, and all Python calls in O3DE are already wrapped by the O3DE Python script files. Users will learn of this update through the release notes and impactful-change messaging from O3DE.

Are there any open questions?

  • What happens to the virtual environment when there is an update to the 3rd Party Python package (e.g. a security update)?
    The workflow remedies this scenario by keeping the package hash within the generated virtual environment folder. If a change in the hash is detected, the bootstrap process will wipe out the virtual environment contents and force generation of a new virtual environment based on the updated 3rd Party Python package (see the sketch after this list).

  • Why is the virtual environment stored in the .o3de folder?
    We need a location that is outside of the engine path, and the .o3de folder already manages all of the 3rd Party packages. Alternatively, the location could be stored in the $HOME/O3DE folder, but since the virtual environment depends on a specific 3rd Party Python package, it made more sense to place it alongside the packages in .o3de.

  • Will this prevent individual projects from injecting Python libraries globally through pip install?
    The virtual environment scope is still at the engine level, not the project level, so this will not solve that issue. It does, however, protect the 3rd Party Python site-packages, since project-injected Python libraries are installed into the virtual environment rather than into the 3rd Party Python package.
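
Returning to the first open question, here is a sketch of the hash check that regenerates a stale virtual environment; the file and variable names are illustrative assumptions:

```bash
# Sketch only: rebuild the venv when the recorded package hash no longer
# matches the hash of the currently downloaded 3rd Party Python package.
# .package_hash, PACKAGE_HASH, and PYTHON_PACKAGE_PATH are placeholders.
STORED_HASH_FILE="$PYTHON_VENV_PATH/.package_hash"
if [ ! -f "$STORED_HASH_FILE" ] || [ "$(cat "$STORED_HASH_FILE")" != "$PACKAGE_HASH" ]; then
    rm -rf "$PYTHON_VENV_PATH"
    "$PYTHON_PACKAGE_PATH/python/bin/python3" -m venv "$PYTHON_VENV_PATH"
    echo "$PACKAGE_HASH" > "$STORED_HASH_FILE"
fi
```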
