brainscore_core.submission.endpoints

Process plugin submissions (data, metrics, benchmarks, models) and score models on benchmarks.

Functions

_get_ids(args_dict, key)

call_jenkins(plugin_info)

Triggered when changes are merged to the GitHub repository, if those changes affect benchmarks or models.

get_user_id(email, db_secret)

make_argparser()

noneable_string(val)

For argparse
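A plausible sketch of such a converter, assuming the common convention that the literal string "None" maps to Python None (the behavior and the `--jenkins_id` flag name below are illustrative, not confirmed by the source):

```python
import argparse

def noneable_string(val):
    """Treat the literal string "None" as Python None; pass other strings through."""
    if val is None or val == "None":
        return None
    return val

# Illustrative usage as an argparse type converter
parser = argparse.ArgumentParser()
parser.add_argument("--jenkins_id", type=noneable_string, default=None)
args = parser.parse_args(["--jenkins_id", "None"])
# args.jenkins_id is None rather than the string "None"
```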

resolve_benchmarks(domain, benchmarks)

Identify the set of benchmarks by resolving benchmarks to the list of public benchmarks if benchmarks is ALL_PUBLIC.
:param domain: "language" or "vision"
:param benchmarks: either a list of benchmark identifiers or the string ALL_PUBLIC to select all public benchmarks

resolve_models(domain, models)

Identify the set of models by resolving models to the list of public models if models is ALL_PUBLIC.
:param domain: "language" or "vision"
:param models: either a list of model identifiers or the string ALL_PUBLIC to select all public models
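Both resolve_models and resolve_benchmarks follow the same ALL_PUBLIC pattern. A minimal sketch of that resolution logic, assuming a hypothetical per-domain registry of public identifiers (the sentinel value, registry name, and example entries are all illustrative):

```python
ALL_PUBLIC = "all_public"  # sentinel; the actual constant's value is an assumption

# Hypothetical registry of public model identifiers per domain (illustrative only)
PUBLIC_MODELS = {"vision": ["alexnet", "resnet-50"], "language": ["distilgpt2"]}

def resolve_models(domain, models):
    """Return the explicit list, or all public models for the domain if ALL_PUBLIC."""
    if models == ALL_PUBLIC:
        return PUBLIC_MODELS[domain]
    return models
```

resolve_benchmarks would look the same with a benchmark registry in place of PUBLIC_MODELS.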

resolve_models_benchmarks(domain, args_dict)

Identify the set of model/benchmark pairs to score by resolving new_models and new_benchmarks in the user input. Prints the names of models and benchmarks to stdout.
:param domain: "language" or "vision"
:param args_dict: a map containing new_models, new_benchmarks, and specified_only, specifying the model/benchmark names to be resolved
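A heavily hedged sketch of how the pairing might work, assuming `specified_only` restricts scoring to exactly the submitted plugins while the default scores new plugins against existing public ones; the real selection logic and the example identifiers below are assumptions:

```python
def resolve_models_benchmarks(domain, args_dict):
    """Resolve new_models/new_benchmarks into the lists to score (illustrative)."""
    all_models = ["alexnet", "resnet-50"]    # illustrative public models
    all_benchmarks = ["bench-A", "bench-B"]  # illustrative public benchmarks
    new_models = args_dict.get("new_models") or []
    new_benchmarks = args_dict.get("new_benchmarks") or []
    if args_dict.get("specified_only"):
        models, benchmarks = new_models, new_benchmarks
    else:
        # assumption: new models run on all benchmarks, and vice versa
        models = new_models or all_models
        benchmarks = new_benchmarks or all_benchmarks
    print("models:", models)
    print("benchmarks:", benchmarks)
    return models, benchmarks
```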

retrieve_models_and_benchmarks(args_dict)

Prepares parameters for the run_scoring_endpoint.

send_email_to_submitter(uid, domain, ...)

Send submitter an email if their web-submitted PR fails.

shorten_text(text, max_length)
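A minimal sketch of a helper with this signature, assuming it truncates to max_length characters and marks the cut with "..." (the actual implementation may differ):

```python
def shorten_text(text, max_length):
    """Truncate text to at most max_length characters, marking truncation
    with '...' (assumed behavior; requires max_length >= 3)."""
    if len(text) <= max_length:
        return text
    return text[:max_length - 3] + "..."
```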

Classes

DomainPlugins()

Interface for domain-specific model + benchmark loaders and the score method.
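A sketch of what such an interface could look like as an abstract base class; the method names below follow the summary (model loader, benchmark loader, score) but are assumptions, not the confirmed API:

```python
from abc import ABC, abstractmethod

class DomainPlugins(ABC):
    """Interface a domain (e.g. vision, language) implements so the scoring
    endpoint can load its plugins. Method names are illustrative."""

    @abstractmethod
    def load_model(self, model_identifier: str):
        """Load a domain-specific model by identifier."""

    @abstractmethod
    def load_benchmark(self, benchmark_identifier: str):
        """Load a domain-specific benchmark by identifier."""

    @abstractmethod
    def score(self, model_identifier: str, benchmark_identifier: str):
        """Score the given model on the given benchmark."""

# Illustrative concrete implementation
class DummyDomainPlugins(DomainPlugins):
    def load_model(self, model_identifier):
        return model_identifier

    def load_benchmark(self, benchmark_identifier):
        return benchmark_identifier

    def score(self, model_identifier, benchmark_identifier):
        return 0.5  # stand-in for a real evaluation
```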

RunScoringEndpoint(domain_plugins, db_secret)

UserManager(db_secret)

Retrieve user information (UID from email / email from UID).
Create a new user from an email address.
Send email to a user.
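The responsibilities above can be sketched as a small class; the database and email plumbing are stubbed out with an in-memory dict, and the method names are assumptions:

```python
class UserManager:
    """Sketch of a user manager keyed by db_secret; storage is a stand-in
    for the real database."""

    def __init__(self, db_secret):
        self.db_secret = db_secret
        self._users = {}  # email -> uid, in place of a database connection

    def get_uid(self, email):
        """Return the UID for an email, creating the user if unknown."""
        if email not in self._users:
            self._users[email] = len(self._users) + 1  # stand-in for user creation
        return self._users[email]

    def get_email(self, uid):
        """Reverse lookup: email from UID."""
        return next((e for e, u in self._users.items() if u == uid), None)

    def send_email(self, uid, subject, body):
        """Stub: a real implementation would deliver mail to the user's address."""
        return f"to={self.get_email(uid)} subject={subject}"
```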