# Re-core
Setting up Re-core is straightforward; most of the work is making sure you have VM templates it can use. Check Re-pack for how to create them.
# Setup
It's easy to get going: just clone the repo and launch the REPL:
$ git clone git@github.com:re-ops/re-core.git
$ cd re-core
# tmux is required for password input
$ tmux new-session re-core
# Now start the REPL environment
$ lein repl
[re-core]λ: (go)
nil
Note: Re-core currently requires the Amazon Corretto 8 JDK.
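To confirm which JDK is active before launching the REPL (the reported version should be 1.8 with a Corretto build):
$ java -version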
# Templates
Re-core clones templates in order to create new VM instances. We use Re-pack to create our templates, which contain the minimal setup required for Re-ops to manage the instance:
- re-ops user for remote access management.
- authorized ssh-key (for automated access) under /home/re-ops/.ssh/authorized_keys
- JRE 8 (for re-gent)
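A template can be sanity-checked before use, for example by confirming key-based SSH access as the re-ops user together with the presence of the JRE; the key path and template-host below are placeholders:
$ ssh -i ~/.ssh/id_rsa re-ops@template-host 'java -version'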
# Configuration
The re-core section contains the configuration options for hypervisors, the Elasticsearch index, logging, and the queue path:
{
  :re-core {
    :queue-dir "/tmp/re-core-queue/"
    :port 8082
    :https-port 8443
    :log {
      :level :info
      :path "re-core.log"
    }
    :hypervisor #profile {
      :dev {
        ; check the Hypervisors section
      }
    }
  }
}
| Section | Property | Description | Comments |
| --- | --- | --- | --- |
| ports | port | Standard HTTP port | Used for the API endpoint; use a reverse proxy to secure it |
| log | level | Default logging level | Possible values: trace, debug, info, error |
| | path | Where the log file is stored locally | |
| elasticsearch | host | The host Elasticsearch is running on | |
| | port | HTTP API port (9200 by default) | |
| | user | Elasticsearch user name | |
| | pass | Elasticsearch password | |
| | index | The index name that re-core will use | |
| ssh | private-key-path | Private SSH key path | Used to perform remote tasks over SSH |
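The Elasticsearch and SSH options from the table live in the same configuration file; the snippet below is only a sketch based on the table above (the :elasticsearch and :ssh key names and all values are illustrative assumptions, not verified against your re-ops.edn):
{
  :re-core {
    :elasticsearch {
      :host "localhost"  ; the host Elasticsearch is running on
      :port 9200         ; HTTP API port
      :user "elastic"    ; illustrative credentials
      :pass "changeme"
      :index "re-core"   ; the index re-core will use
    }
    :ssh {
      :private-key-path "/home/re-ops/.ssh/id_rsa" ; key used for remote tasks
    }
  }
}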
# Hypervisors
Hypervisor configuration is located under the re-core section of re-ops.edn.
# AWS
AWS requires the following information under the re-core/hypervisor/aws section in the configuration file:
{
  :hypervisor {
    :dev {
      :aws {
        :access-key ""
        :secret-key ""
        :ostemplates {
          :ubuntu-18.04 {:ami "" :flavor :debian}
        }
        :default-vpc {
          :vpc-id "vpc-123456" :subnet-id "subnet-123456" :assign-public true
        }
      }
    }
  }
}
| Section | Property | Description | Comments |
| --- | --- | --- | --- |
| | access-key | AWS access key | |
| | secret-key | AWS API secret key | |
| ostemplates | | Mapping from system OS key to AMI and flavor (redhat or debian) | |
| default-vpc | vpc-id | The id of the VPC that will be used with EC2 instances | |
| | subnet-id | The id of the subnet that will be used with EC2 instances | |
| | assign-public | Whether to assign a public IP or not | If false, a VPN is used to access the internal VPC network |
# DigitalOcean
DigitalOcean requires the following configuration under the re-core/hypervisor/digital-ocean section in the configuration file:
:hypervisor {
  :dev {
    :digital-ocean {
      :token ""
      :ssh-key ""
      :ostemplates {
        :ubuntu-18.04 {:image "ubuntu-18-04" :flavor :debian}
      }
    }
  }
}
| Section | Property | Description | Comments |
| --- | --- | --- | --- |
| | token | DigitalOcean authentication token | |
| | ssh-key | SSH key id in the DigitalOcean UI | Used for password-less access to droplets |
| ostemplates | | Mapping from OS key to its DigitalOcean image | Check Re-pack for how to create a template |
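The token can be sanity-checked against the DigitalOcean API before it is placed in the configuration; this is a generic API call unrelated to Re-core, with $DO_TOKEN standing in for your token:
$ curl -H "Authorization: Bearer $DO_TOKEN" "https://api.digitalocean.com/v2/account"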
# KVM
KVM requires the following information under the re-core/hypervisor/kvm section in the configuration file:
:hypervisor {
  :dev {
    :kvm {
      :nodes {
        :remote {:username "ronen" :host "somehost" :port 22}
        ; must use localhost key for localhost
        :localhost {:username "ronen" :host "localhost" :port 22}
      }
      :ostemplates {
        :ubuntu-16.04 {:template "ubuntu-16.04" :flavor :debian}
      }
    }
  }
}
Note: we use libvirt over SSH (with key-based authentication).
| Section | Property | Description | Comments |
| --- | --- | --- | --- |
| nodes | username | SSH user name | Add your SSH key to /home/{user}/.ssh/authorized_keys |
| | host | KVM node host | |
| | port | SSH port | |
| ostemplates | template | Template VM name | Check Deploy |
| | flavor | OS flavor of the template | Currently only :debian is supported |
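Since the nodes are reached through libvirt over SSH, connectivity can be verified manually before adding a node; a sketch using the remote node from the example above (user and host are placeholders):
$ virsh -c qemu+ssh://ronen@somehost/system list --all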
# LXC
LXC requires the following information under the re-core/hypervisor/lxc section in the configuration file:
:hypervisor {
  :dev {
    :lxc {
      :auth {
        :path #join [#env HOME "/.config/lxc"]
        :p12 "certificate.p12"
        :password ""
        :crt "127.0.0.1.crt"
      }
      :nodes {
        :localhost {
          :host "127.0.0.1" :port 8443
        }
      }
      :ostemplates {
        :ubuntu-18.04.2 {:template "ubuntu-18.04.2_node-8.x" :flavor :debian}
      }
    }
  }
}
Note: see how to add a remote LXD server by following this guide.
| Section | Property | Description | Comments |
| --- | --- | --- | --- |
| auth | path | LXC client certificates path | Usually defaults to ~/.config/lxc |
| | p12 | Generated p12 client certificate | See Generating the p12 certificate below |
| | password | p12 certificate password | |
| | crt | The remote server crt file | Created when adding a remote LXD node |
| nodes | host | Remote LXD node address | |
| | port | Remote LXD host port | |
| ostemplates | template | Container image name | |
| | flavor | OS flavor of the template | Currently only :debian is supported |
# Generating the p12 certificate
The p12 certificate is generated from the client key and certificate, together with the remote server's certificate:
$ openssl pkcs12 -export -out certificate.p12 -inkey client.key -in client.crt -certfile servercerts/127.0.0.1.crt
The client key and certificate are generated after a new remote is added to our local LXC client:
$ lxc remote add 127.0.0.1
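Once exported, the bundle can be inspected with a generic openssl check (it will prompt for the export password used above):
$ openssl pkcs12 -info -in certificate.p12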