BR-SP-SPRACE: Performance and Spinoffs
Rogério Iope
SPRACE, NCC/UNESP
The SPRACE Project - São Paulo Research and Analysis Center

Timeline
- 2004: Started processing DZero/Fermilab data
- 2005: OSG official site
- 2006: CMS/CERN association
- 2008: LHC first beam
- 2009: WLCG MoU signed by FAPESP: official Tier-2 facility
- 2012: Higgs discovery

BR-SP-SPRACE
- The only official WLCG Tier-2 site in Latin America
- MoU signed between FAPESP and CERN on 20 April 2009
- One of the most available and reliable among all Tier-2 sites
- Processes: physics analysis jobs and Monte Carlo simulation
- Storage: datasets for EXOTICA, HI, SUSY
- Network connection:
  - 10 Gbps R&D link: São Paulo - Miami (Science DMZ)
  - 2 x 10G redundant links to ANSP (Academic Network at São Paulo)
Available Resources
- Deployed in 5 phases
- Current status:
  - 13,698 HEP-SPEC06
  - 787 TB of storage
Cumulative CPU Hours (one year)
CPU Hours by Facility: US CMS & SPRACE
SPRACE Upgrade and CMS Pledge
Dataset Download
- SUSY (15.6 TB) + Heavy Ions (3.9 TB) + Exotica (636.9 GB)
- 5.6 Gbps
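As a rough cross-check on the numbers above, a minimal sketch in C of how long the combined download takes at a sustained 5.6 Gbps; it assumes decimal units (1 TB = 10^12 bytes), which the slide does not state:

```c
#include <stdio.h>

/* Back-of-the-envelope transfer time for the dataset download above.
 * Assumes decimal units (1 TB = 1e12 bytes, 1 Gbps = 1e9 bit/s). */
int main(void) {
    double volume_tb = 15.6 + 3.9 + 0.6369;  /* SUSY + Heavy Ions + Exotica */
    double rate_gbps = 5.6;                  /* sustained transfer rate */

    double bits    = volume_tb * 1e12 * 8.0;
    double seconds = bits / (rate_gbps * 1e9);

    printf("%.1f TB at %.1f Gbps takes about %.1f hours\n",
           volume_tb, rate_gbps, seconds / 3600.0);
    return 0;
}
```

With these assumptions the roughly 20 TB shown above moves in about 8 hours.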
Data Transfer: Best Result
- 9.6 Gbps
North-South Hemisphere Record
- 16.5 Gbps at peak (2 x 8.26)
- Sustained for one hour
LHC Networks: LHCOPN and LHCONE

WLCG main traffic flows:
- T0-T1 and T1-T1: LHCOPN (in production)
- T1-T2 and T2-T2: LHCONE (being implemented)

LHCONE will implement different services:
- Multipoint service (VRF): entering production phase; SPRACE is about to be connected
- Point-to-point service (static & dynamic): R&D
SPRACE Spin-off: GridUNESP
GridUNESP Project: a SPRACE Spinoff

First campus grid in Latin America

Timeline:
- 2004: Call for scientific proposals
- 2005: Funding requested from FINEP
- 2006: Proposal approved by FINEP
- 2007: Project starts
- 2008: Hardware acquisition
- 2009: Datacenter construction

Inauguration: September 2009
GridUNESP: 7 sites, 33 TFlops

Central cluster (at NCC)
- 2048 processing cores
- 4 data servers (96 TB)
- SAN storage system (36 TB)
- 10G link to the U.S.

7 secondary clusters
- 896 processing cores
- SAN storage system (6 TB at each site)
- 1G link to the central cluster
Central Monitoring
Research Projects and Users

Scientific impact:
- 50+ journal papers
- 20+ conference papers
- 10+ theses/dissertations

Current status:
- 50 projects
- 250 users
GridUNESP - OSG Partnership

GridUNESP:
- The only VO outside the US
- Shares the infrastructure with all OSG members
- Technical collaboration
- Training activities
Cumulative CPU hours: 43+ million
ANSP Grid CA

Grid Certificate Authority for the State of São Paulo
- Initiative of NCC/UNESP
- Managed by the Academic Network at São Paulo

Hardware
- Collocated at NCC
- Two Hardware Security Modules (HSMs)

Accreditation
- Approved by TAGPMA in April 2012 and by IGTF in August 2012
- Included in TACAR (TERENA Academic CA Repository) in October 2012
Partnership with Intel
Cloud computing: EduCloud and beyond

First phase (Mar-Aug 2012)
- Cloud infrastructure (IaaS) focused on educational activities
- Partnership with the São Paulo Secretary of Education
- Automated, customized load balancing
- Two OpenStack clouds deployed, at the Secretary of Education datacenter and at the NCC/UNESP datacenter

Second phase (Mar-Aug 2013)
- Studies on Intel TXT, TPM, and OAT technologies
- Studies on HPC bottlenecks (VMs vs. real machines); see the sketch after this slide

Next steps: new project proposal under discussion
- Studies on the integration of OpenStack and OpenFlow
- Intel SDN: Open Network Platform; partnership with Caltech
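For the VM-versus-bare-metal bottleneck studies mentioned above, a minimal sketch of the kind of memory-bandwidth microbenchmark one can run identically inside a VM and on the host. It is illustrative only; the project's actual benchmark suite is not named in the slides, and the array size and repetition count are arbitrary choices:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* STREAM-style triad loop: build with optimization (e.g. "cc -O2 triad.c"),
 * run the same binary in a VM and on bare metal, and compare the bandwidth. */
#define N (16 * 1024 * 1024)   /* 16M doubles, ~128 MB per array */
#define REPS 20

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < REPS; r++)
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];                 /* triad kernel */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double bytes = (double)REPS * 3.0 * N * sizeof(double);
    printf("triad bandwidth: %.1f GB/s (check value: %g)\n",
           bytes / secs / 1e9, a[N / 2]);   /* print a[] so the stores are kept */

    free(a); free(b); free(c);
    return 0;
}
```

Comparing the reported bandwidth between the two environments gives a first indication of virtualization overhead on memory-bound HPC codes.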
Intel / NCC Manycore Testing Lab

First Manycore Testing Lab outside the USA
- Servers donated by Intel
  - Intel Xeon Phi coprocessors
  - Suite of Intel software development tools
- Users can
  - Test the performance of their codes on a highly parallel system (see the sketch after this slide)
  - Register as guests in the NCC databases (LDAP)
- Authentication / authorization
  - Controlled by digital certificates issued by the ANSP Grid CA

First results: hands-on activities at
- INFIERI Summer School 2013 (University of Oxford)
- Intel Software Conference 2013 (NCC/SP and COPPE/RJ)
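A minimal sketch of the kind of thread-scaling test such a lab makes possible. It assumes an OpenMP-capable compiler (e.g. gcc -fopenmp on the host, or Intel's tools for native coprocessor builds) and is illustrative rather than the lab's actual hands-on material:

```c
#include <omp.h>
#include <stdio.h>

/* Time the same parallel reduction with a growing number of OpenMP threads
 * to see how a code scales on a manycore system. The problem size is an
 * arbitrary illustrative choice. */
#define N 200000000L

int main(void) {
    int max_threads = omp_get_max_threads();

    for (int t = 1; t <= max_threads; t *= 2) {
        omp_set_num_threads(t);

        double sum = 0.0;
        double start = omp_get_wtime();
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < N; i++)
            sum += 1.0 / (1.0 + (double)i);
        double elapsed = omp_get_wtime() - start;

        printf("%3d threads: %6.3f s  (sum = %.6f)\n", t, elapsed, sum);
    }
    return 0;
}
```

Plotting run time against thread count gives the speed-up curve; on a Xeon Phi the same exercise extends to hundreds of hardware threads.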
EduGrid: a Distributed R&E Platform

A general-purpose experimental platform
- Experiment-driven research in distributed computing and networking
- Provides and manages a network of VMs and a library of images

A learning platform
- Training based on MOOC concepts
- Community members
  - Contribute to MOOC resources
  - Instantiate VMs according to their needs

Hub sites:
- São Paulo (Brazil), Miami (USA), Córdoba (Argentina), Bogotá (Colombia), San José (Costa Rica), Guadalajara (Mexico)

Satellite sites:
- UFSCar, UFABC, IFSC/USP, Poli Elétrica/USP, Unicamp / LNLS, FIU (Miami) & CERN
Petrobras: HPC & HSN for CTBS
Núcleo Tecnológico Regional - BS
Technological Center of the Santos Basin (CTBS)

- Petrobras center for research on petroleum exploitation at the pre-salt layer
- NCC/SPRACE responsible for the HPC infrastructure:
  - Scientific computing
  - Real-time visualization
  - High-resolution video streaming
  - Training
- High-speed network:
  - 100 Gbps (NCC-USP-CTBS)
  - Connection to CENPES (UFRJ)
  - International connections
- Ms. Foster (President): CTBS should be in place by 2015
- Contract should be signed by the state universities by the end of Sept. 2013
HPC: 200+ TFlops
HSN: 100 Gbps ring + secondary links
(Diagram: 100 Gbps links between the sites)

Background excerpt, translated from the vendor proposal ("Section 2. Proposed Configuration"), presenting the detailed configuration requested by UNESP NCC:

2.1 Processing hardware
- 5 x 42U racks
- 4 x rack leader nodes for remote-boot management in high availability
- 2 x admin nodes for full cluster management in high availability
- 1U LCD console with keyboard and mouse for remote management through the admin node via IPMI 2.0
- 8 x blade enclosure pairs with 36 slots and dual-plane InfiniBand FDR; 32 x premium InfiniBand FDR switch blades; Enhanced Hypercube topology; redundant power supplies and blowers
- 288 x IP 113 blades, each with 2 x eight-core Intel Xeon E5-2680 (2.7 GHz/20 MB cache, 8.0 GT/s), 128 GB DDR3-1600 RDIMM, 2 x on-board single-port InfiniBand FDR chips, and a dedicated GigE management port
- 2 x SGI C2108 login nodes and 2 x SGI C2108 gateway nodes, each with 2 x eight-core Intel Xeon E5-2665 (2.4 GHz/20 MB cache, 8.0 GT/s), 64 GB DDR3-1600 RDIMM, 4 x 600 GB 15K RPM SAS HDs, DVD-RW, 2 InfiniBand FDR ports, 2 x 10 Gigabit SFP+ ports, a dedicated GigE management port, and redundant power supplies
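As a back-of-the-envelope check on the compute part of this configuration, a sketch that counts only the dual E5-2680 blades and assumes 8 double-precision FLOPs per cycle per core (AVX) for that processor generation; accelerators or any additional nodes contributing to the 200+ TFlops figure are not included:

```c
#include <stdio.h>

/* Theoretical peak of the 288 dual-socket E5-2680 blades listed above.
 * Assumes 8 DP FLOPs/cycle/core (AVX); other hardware is not counted. */
int main(void) {
    int    blades            = 288;
    int    sockets_per_blade = 2;
    int    cores_per_socket  = 8;
    double clock_ghz         = 2.7;
    double flops_per_cycle   = 8.0;   /* double precision with AVX */

    double gflops = blades * sockets_per_blade * cores_per_socket
                  * clock_ghz * flops_per_cycle;
    printf("peak of the blades alone: %.1f TFlops\n", gflops / 1000.0);
    return 0;
}
```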
Joint Initiatives with Padtec
Salt Lake City: 100G link demo
Denver: improved demo being planned
Planning for the Future
The São Paulo State Grid - SPSGrid

Infrastructure for the R&E institutions of São Paulo
- Inspired by NYSGrid (New York State Grid)
- Consortium model involving funding agencies (FAPESP/ANSP) and R&E institutions
- Grid Operations Center: engineering, monitoring, and consulting services
- Education and training provided by the EduGrid: grid training over VMs
EXTRAS
SPRACE - Timeline and Milestones (I)

2003
- NOV: S. F. Novaes, E. M. Gregores: "Thematic Project" approved by FAPESP
- Procurement processes for buying hardware start

2004
- JAN: Installation of the physical infrastructure starts at the University of São Paulo
- MAR: "Phase I" deployed: SPRACE starts processing D0 jobs
- SEP: One million MC events processed for the D0 collaboration
- NOV: SC'04 record of data transmission (~2 Gbps)

2005
- MAR: 3.5 million MC events & 1 TB of data generated for D0
- JUN: "Phase II" deployed: 115 cores & 12 TB of storage
- JUL: US Open Science Grid (OSG) comes to life
- AUG: SPRACE signs membership with OSG
SPRACE - Timeline and Milestones (II)

2006
- New WDM channel (ANSP): SPRACE @ 1 Gbps
- SPRACE starts processing jobs for the CMS VO of OSG
- SEP: "Phase III" deployed: 180 cores & 12 TB of disk

2007
- New sysadmins for SPRACE start
- Discussion (ANSP) to deploy a CA for the State of São Paulo
- New UNESP campus connected to the MetroSampa network

2008
- MAR: SPRACE domain changes from "if.usp.br" to "sprace.org.br"
- SEP: LHC first beam
- OCT: New Thematic Project approved
- NOV: NCC is officially created
SPRACE - Timeline and Milestones (III)

2009
- JAN: Construction of the NCC datacenter at the new UNESP campus starts
- APR: FAPESP signs the MoU with WLCG: BR-SP-SPRACE is now official
- SPRACE cluster transferred from USP to UNESP (~5 tons in 32 hours)
- SEP: Inauguration of the NCC datacenter and the GridUNESP infrastructure
- NOV: SC'09 in Portland: new record of data transmission (~8.8 Gbps)

2010
- JAN: New router (FAPESP): NCC connected to KyaTera & MetroSampa
- AUG: SPRACE "Phase IV" deployed: 200 TB of storage
- New amplifiers (Padtec) on the KyaTera network: 10 Gbps from NCC to ANSP
- DEC: São Paulo OSG School: 5 experts from the US, 30+ participants

2011
- KyaTera network at NCC in production: 10G (commodity) + 10G (R&D)
- MAR: GridUNESP presented at OSG-AHM'11 (Harvard Univ.)
- Partnership with Intel: development of a cloud at the Secretary of Education
SPRACE - Timeline and Milestones (IV)

2012
- MAY: NCC workshop for training system administrators in HPC
- JUN: SPRACE "Phase V" deployed: ~850 TB of storage (1 PB raw)
- AUG: IGTF approves the ANSP Grid CA
- NOV: SC'12: SPRACE-Padtec-Caltech demonstration of a 100G link

2013
- JUL: NCC server used at the INFIERI Summer School (Oxford)
- AUG: NCC/SP and COPPE/RJ: "Intel Software Conference"
- NOV: SC'13: OpenFlow exercise & new demo with Padtec

2014
- EduGrid project with Intel
- Cloud technologies for CMS with Caltech
- SPRACE "Phase VI" should be deployed
- SC'14 demonstration
Reliability and Availability

Availability = Uptime / (Total time - Time_status_was_UNKNOWN)
Reliability  = Uptime / (Total time - Scheduled Downtime - Time_status_was_UNKNOWN)

HS06: installed capacity of the site, measured in HEP-SPEC06 (HS06).
Reliability and availability for a federation: weighted average over all sites in the federation, based on installed capacity (HS06).
Colour coding: N/A | < 30% | < 60% | < 90% | >= 90%

Federation              Availability  Reliability
AT-HEPHY-VIENNA-UIBK        85 %         85 %
AU-ATLAS                    99 %         99 %
BE-TIER2                    91 %         91 %
BR-SP-SPRACE               100 %        100 %
CA-EAST-T2                  99 %         97 %
CA-WEST-T2                  94 %         94 %
CH-CHIPP-CSCS              100 %         96 %
CN-IHEP                    100 %        100 %
CZ-Prague-T2                89 %         83 %
DE-DESY-ATLAS-T2            98 %         98 %
DE-DESY-GOE-ATLAS-T2        88 %         88 %
DE-DESY-LHCB               100 %        100 %
DE-DESY-RWTH-CMS-T2         97 %         97 %
DE-FREIBURGWUPPERTAL        99 %         99 %
DE-GSI                      N/A          N/A
DE-MCAT                     93 %         93 %
EE-NICPB                    99 %         99 %
ES-ATLAS-T2                100 %        100 %
ES-CMS-T2                   97 %         96 %
ES-LHCb-T2                  93 %         93 %
FI-HIP-T2                   96 %         96 %
IN-INDIACMS-TIFR             2 %          2 %
IT-INFN-T2                  95 %         93 %
JP-Tokyo-ATLAS-T2          100 %        100 %
KR-KISTI-T2                 68 %         61 %
KR-KNU-T2                   99 %         99 %
NO-NORGRID-T2               99 %         99 %
PK-CMS-T2                   99 %         99 %
PL-TIER2-WLCG               99 %         97 %
PT-LIP-LCG-Tier2            97 %         97 %
RO-LCG                      99 %         99 %
RU-RDIG                     97 %         97 %
SE-SNIC-T2                  97 %         96 %
SI-SiGNET                  100 %         99 %
T2_US_Caltech               99 %         99 %
T2_US_Florida               91 %         91 %
T2_US_MIT                   96 %         96 %
T2_US_Nebraska             100 %        100 %
T2_US_Purdue                91 %         91 %
T2_US_UCSD                  99 %         99 %
T2_US_Wisconsin             80 %         80 %
TR-Tier2-federation         84 %         82 %
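A minimal sketch of the two formulas above as code; the variable names and the example numbers are illustrative and are not taken from the WLCG monitoring tools:

```c
#include <stdio.h>

/* Availability and reliability as defined above. All inputs share the same
 * time unit (e.g. hours over the reporting period). */
static double availability(double uptime, double total, double unknown) {
    return uptime / (total - unknown);
}

static double reliability(double uptime, double total,
                          double scheduled_downtime, double unknown) {
    return uptime / (total - scheduled_downtime - unknown);
}

int main(void) {
    /* Hypothetical month: 720 h total, 10 h scheduled downtime,
     * 5 h with UNKNOWN status, 700 h of uptime. */
    double total = 720.0, sched = 10.0, unknown = 5.0, up = 700.0;

    printf("availability: %.1f %%\n", 100.0 * availability(up, total, unknown));
    printf("reliability:  %.1f %%\n", 100.0 * reliability(up, total, sched, unknown));
    return 0;
}
```

A federation's figures are then the capacity-weighted (HS06) average of its sites, as noted above.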
WLCG Resources
WLCG Computing Model
The CMS Computing Model
SC'13 Plan
Xeon Phi lab sessions: 16 attendees, from
- Bristol University
- Purdue University
- University of Oxford
- RWTH Aachen University
- University of Leicester
- Observatoire de Paris
- Amsterdam Sci. Instr.
- IN2P3
- CERN
Xeon Phi lab sessions: 120 attendees
GridUNESP Research Areas

Biology and Biophysics
- Molecular dynamics
- Computational biophysics
- EEG & apnea
- Proteomics
- Genomics and phylogenetics
- Amphibians at high elevations

Chemistry
- Modeling of new materials
- Quantum chemistry
- Intermetallic phases
- Coordination compounds
- Vibrational circular dichroism

Computer Science
- Data mining and IPFIX
- Grid algorithm optimization
- Numerical methods

Geosciences
- Terrestrial deformation
- Platform modeling

Materials Science
- Superconductor vortices
- Electronic structure
- Photo-dissociation of polymers
- Strongly correlated electrons

Meteorology
- Historical precipitation in São Paulo
- Multi-scale interaction

Physics
- Chaos and phase transitions
- Lattice QCD
- Dark Energy Survey
- Few-body systems
- High-energy physics
Formal Agreements with Intel
(Image: first page of the collaboration agreement between the Universidade Estadual Paulista "Júlio de Mesquita Filho" (UNESP), through its Núcleo de Computação Científica, and Intel Semicondutores do Brasil Ltda., represented by its General Manager in Brazil, Fernando Martins.)
International Connectivity