Introduction
HttpClient is the library for issuing GET, POST, PUT, and other requests to Web APIs, and it is important to use it correctly. All of its request methods are async. Ideally you call them from an async method, but if you have to call one from synchronous code, for example in a console application that has no separate UI thread, you can do it like this:
public void MySyncMethod()
{
    using (HttpClient client = new HttpClient())
    {
        client.BaseAddress = new Uri("https://google.com");
        var response = client.GetAsync("SomeAddress")
            .ConfigureAwait(false).GetAwaiter().GetResult();
        if (response.IsSuccessStatusCode)
        {
            // Do something...
        }
    }
}
Using ConfigureAwait(false) is very important for applications that do not have a separate UI thread. Creating a new HttpClient for each request is not a good idea; reuse the HttpClient as much as possible.
Installing Openshift origin 3.7 with NFS storage
The first and foremost step is planning your architecture.
In this installation I will set up 1 master and 3 nodes; the master is also used as a node, so 3 machines are enough (master, node1, and node2).
Do not forget to register a DNS domain. In this example I will use sample.com.
1- Install and run 3 machines with CentOS 7.x
2- Set the hostnames of the 3 machines
# hostnamectl set-hostname master.sample.com
# hostnamectl set-hostname node1.sample.com
# hostnamectl set-hostname node2.sample.com
3- Forward the following domains to the machines
*.sample.com => master
*.apps.sample.com => master
node1.sample.com => node1
node2.sample.com => node2
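If you manage sample.com with a BIND-style zone file, a minimal sketch of the records could look like the following. The IP addresses are placeholders for your own machines, and the occonsole record matches the cluster hostname used in the inventory later in this post:
; sample.com zone - example records only, adjust the IPs to your environment
master      IN  A   192.168.1.10
node1       IN  A   192.168.1.11
node2       IN  A   192.168.1.12
occonsole   IN  A   192.168.1.10
*.apps      IN  A   192.168.1.10   ; wildcard for application routes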
4- Generate an SSH key on the master and copy it to all servers, because Ansible needs passwordless SSH access to every machine
# ssh-keygen -t dsa
# cat .ssh/id_dsa.pub
* Copy the key into the .ssh/authorized_keys file on every machine
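If ssh-copy-id is available it can do the copying for you; the hostnames below are the ones set in step 2:
# ssh-copy-id root@master.sample.com
# ssh-copy-id root@node1.sample.com
# ssh-copy-id root@node2.sample.com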
6- Install the following packages on all machines
# yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct
# yum update
# yum install atomic
# yum -y install \ https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# sed -i -e "s/^enabled=1/enabled=0/" /etc/yum.repos.d/epel.repo
# yum -y --enablerepo=epel install ansible pyOpenSSL
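You can quickly confirm that Ansible and pyOpenSSL were pulled in from EPEL; the exact versions will depend on the EPEL snapshot you get:
# ansible --version
# python -c "import OpenSSL; print(OpenSSL.__version__)"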
7- Clone the openshift/openshift-ansible repository from GitHub, which provides the required playbooks and configuration files (on the master):
# cd ~
# git clone https://github.com/openshift/openshift-ansible
# cd openshift-ansible
# git checkout release-3.7
# git pull
8- Run the following commands
# atomic host upgrade
# yum install docker-1.12.6
# rpm -V docker-1.12.6
9- Edit docker-storage and set DOCKER_STORAGE_OPTIONS to -s overlay2
# vi /etc/sysconfig/docker-storage
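After the edit, the storage options line in /etc/sysconfig/docker-storage should read, for example:
DOCKER_STORAGE_OPTIONS="-s overlay2"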
10- Reset Docker's local data and restart it like this:
# systemctl stop docker
# rm -rf /var/lib/docker/
# systemctl enable docker
# systemctl start docker
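Once Docker is back up you can verify that the overlay2 driver is actually in use:
# docker info | grep -i "storage driver"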
11- Install NTP for time synchronization
# yum install ntp
# systemctl restart ntpd
12- Finally, install Java on the master node
# yum install java-1.8.0-openjdk-headless
13- Create a folder for NFS on the master
# mkdir /exports
# yum install nfs*
14- Create a hosts file for the Ansible configuration
# vi /etc/ansible/origin_hosts
and put the following lines in this file:
[OSEv3:children]
masters
nodes
etcd
nfs

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
containerized=true
openshift_docker_use_system_container=True
openshift_release=v3.7.0
openshift_image_tag=v3.7.0
openshift_install_examples=true
enable_docker_excluder=false
enable_openshift_excluder=false
#osm_etcd_image=registry.fedoraproject.org/f26/etcd
osm_use_cockpit=true
osm_cockpit_plugins=['cockpit-kubernetes']
openshift_master_cluster_method=native
osm_custom_cors_origins=['.*']
#osm_default_node_selector='purpose=work'
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability
openshift_master_cluster_hostname=occonsole.sample.com
openshift_master_cluster_public_hostname=occonsole.sample.com
openshift_master_default_subdomain=apps.sample.com
openshift_docker_options='--selinux-enabled --insecure-registry 172.30.0.0/16'
openshift_clock_enabled=true
openshift_master_api_port=443
openshift_master_console_port=443
openshift_hosted_router_replicas=1
openshift_enable_service_catalog=false

#######
openshift_hosted_manage_registry=true
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
openshift_hosted_registry_replicas=1
openshift_hosted_registry_deploy=true

openshift_logging_install_logging=true
openshift_hosted_logging_enable_ops_cluster=True
openshift_logging_use_ops=true
openshift_logging_kibana_hostname=kibana.apps.sample.com
openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_nfs_directory=/exports
openshift_logging_storage_nfs_options='*(rw,root_squash)'
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=10Gi
openshift_logging_storage_labels={'storage': 'logging'}
openshift_hosted_logging_deploy=true

openshift_metrics_install_metrics=true
openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_nfs_options='*(rw,root_squash)'
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi
openshift_metrics_storage_labels={'storage': 'metrics'}
openshift_metrics_hawkular_hostname=hawkular-metrics.apps.sample.com
openshift_metrics_cassandra_storage_type=nfs
openshift_hosted_metrics_deploy=true

openshift_hosted_prometheus_deploy=false
openshift_prometheus_storage_kind=nfs
openshift_prometheus_storage_access_modes=['ReadWriteOnce']
openshift_prometheus_storage_nfs_directory=/exports
openshift_prometheus_storage_nfs_options='*(rw,root_squash)'
openshift_prometheus_storage_volume_name=prometheus
openshift_prometheus_storage_volume_size=10Gi
openshift_prometheus_storage_labels={'storage': 'prometheus'}
openshift_prometheus_storage_type='pvc'
openshift_prometheus_alertmanager_storage_kind=nfs
openshift_prometheus_alertmanager_storage_access_modes=['ReadWriteOnce']
openshift_prometheus_alertmanager_storage_nfs_directory=/exports
openshift_prometheus_alertmanager_storage_nfs_options='*(rw,root_squash)'
openshift_prometheus_alertmanager_storage_volume_name=prometheus-alertmanager
openshift_prometheus_alertmanager_storage_volume_size=10Gi
openshift_prometheus_alertmanager_storage_labels={'storage': 'prometheus-alertmanager'}
openshift_prometheus_alertmanager_storage_type='pvc'
openshift_prometheus_alertbuffer_storage_kind=nfs
openshift_prometheus_alertbuffer_storage_access_modes=['ReadWriteOnce']
openshift_prometheus_alertbuffer_storage_nfs_directory=/exports
openshift_prometheus_alertbuffer_storage_nfs_options='*(rw,root_squash)'
openshift_prometheus_alertbuffer_storage_volume_name=prometheus-alertbuffer
openshift_prometheus_alertbuffer_storage_volume_size=10Gi
openshift_prometheus_alertbuffer_storage_labels={'storage': 'prometheus-alertbuffer'}
openshift_prometheus_alertbuffer_storage_type='pvc'

#######
openshift_master_overwrite_named_certificates=true
openshift_set_hostname=True
#openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}

[masters]
master.sample.com openshift_schedulable=false

[etcd]
master.sample.com openshift_schedulable=false

[nfs]
master.sample.com openshift_schedulable=false

[nodes]
master.sample.com openshift_schedulable=false
node1.sample.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true
node2.sample.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true
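Before starting the installation it is worth checking that Ansible can reach every host in this inventory over SSH, using the inventory file from step 14:
# ansible all -i /etc/ansible/origin_hosts -m ping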
Now run the installation playbook with Ansible:
# ansible-playbook -i /etc/ansible/origin_hosts openshift-ansible/playbooks/byo/config.yml -vvv
After about 20 minutes you will have your OpenShift cluster :)
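A few quick checks on the master can confirm that the cluster and its NFS-backed volumes came up; the exact output will depend on your environment:
# oc get nodes
# oc get pv
# exportfs -v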
You can create a user with htpasswd:
# htpasswd -b /etc/origin/master/htpasswd [user] [pass]
Give the user access to all projects:
# oc adm policy add-cluster-role-to-user cluster-admin [user]
Now you can open your OpenShift console in a browser:
https://occonsole.sample.com
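To log in to the cluster from the command line as that user, something like the following should work; the [user]/[pass] placeholders are the credentials created with htpasswd above:
# oc login https://occonsole.sample.com:443 -u [user] -p [pass]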
openshift-master-api did not work.
Did you set your hostname correctly?
Can you help me?
What can I do for you? Would you please give some details?
Did you try installing OCP with openshift_node_kubelet_args activated? It doesn't work for me, I get Python errors.
My name is Akram BLOUZA. I just discovered your blog. I did the same thing in http://blog.wescale.fr/2018/01/21/simuler-une-installation-dun-cluster-openshift-prod-ready-en-30-minutes/
You can use the following line in the advanced installation:
openshift_node_kubelet_args={'max-pods': ['40'], 'resolv-conf': ['/etc/resolv.conf'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}
Or you can change it after installation.