Hyperledger Avalon Introduction
Hello. So today we are not going to do a deep dive; we are actually going to look at the high-level architecture, and then we will have separate deep-dive sessions later on for the different areas, because it is hard to go deep in one hour. Today we are going to start with the background and the usages that Avalon is covering, because it is important that we are all on the same page. Then we are going to talk about the architecture and our plans: what we are going to do with the current architecture and how we are going to advance it to achieve our goals. If we have a little time left, it would also be interesting to hear what other folks plan to do, as a follow-up to the discussion.

First of all, just to be on the same page: the idea of Avalon is to achieve scalability and privacy by offloading execution off chain. In a traditional blockchain, the nodes that process the data all obtain the data, and the data is processed on chain. That is all great, but everything is transparent and everything has to be replicated on every node, and obviously that has limitations for enterprises. So what we are trying to do is utilize trusted compute options. Avalon currently supports the Intel SGX trusted execution environment, which runs off chain, for example in an enterprise facility or on a cloud service. The diagram shows that enterprise A has a trusted execution environment that processes the data. Whenever the enterprises communicate, they exchange data in encrypted format; the data is processed on enterprise A's server, but it is protected in such a way that it cannot be seen even by enterprise A itself. The blockchain keeps just the residual data of the processing, effectively the cryptographic proofs. In this diagram the trusted compute runs at enterprise A's site, but it can actually run on either enterprise A's or enterprise B's side.

I think last time we briefly touched on the possible use cases that we are trying to address with Avalon, and I am going to go through two of them. One of them is focused more on scalability, and that is what is shown here for the IoT usage: we have a warehouse customer with humidity and temperature sensor data. On the left-hand side, without a trusted execution environment, it reports all the data to the blockchain, and obviously there is a good chance the blockchain is going to be overloaded by the amount of data being sent to it. One way to handle that is to have a trusted execution environment in the warehouse that collects this data and sends periodic, potentially infrequent, updates to the blockchain. Because this data is processed in the trusted compute environment, it still carries trust-level capabilities similar to the blockchain, so we can establish trust across the whole IoT solution.

Another type of usage we are targeting is related to confidentiality. Let us look at an example situation where a business is requesting a quote for medical insurance. The business has a health plan in place and wants a discount, so it allows the insurance company to get access to the employees' medical records, and it grants this access through the blockchain (step 1), known to the insurance company. In step 2 it makes a request for a quote and puts it in the quote queue, also on chain. The insurance company gets the quote request and wants to calculate the risk of providing insurance to this employee pool, so it is going to calculate the risk — for example, the risk of heart disease. For that, the insurance company is going to do processing in its own trusted execution environment, and it is going to submit requests to one or more trusted execution environments at the hospitals. The hospitals do not have to share their data, even in encrypted format. In step 5 the hospital verifies that access to the medical data of this particular subset of patients was indeed granted by the business, and then it processes the risk for the subset of the employees actually present at this hospital. The intermediate results are returned to the insurance company, which processes all of the results, also in a trusted execution environment, and produces the final risk number. In step 6 the insurance company provides the quote, and the business gets the result in step 7. The key here is how the information is processed in the trusted execution environments: the hospitals use a trusted execution environment to protect the algorithms provided by the insurance company, which is IP the insurance company wants to protect; and the insurance company uses a trusted execution environment to make sure the intermediate results provided by the hospitals cannot be used to infer information about particular employees, for example if a hospital has only a very small number of employees in the pool. These two use cases should give a general understanding of what Avalon is trying to address.

So, moving to the architecture now — any questions on the background and the use cases? I went through them pretty quickly because I think we talked about this before; I just wanted to make sure that everybody on the call has the same understanding of what we are trying to address here.
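Before taking questions, the warehouse IoT pattern described earlier can be sketched in a few lines. This is purely an illustration, not Avalon code: the class name `SensorAggregator` and its methods are hypothetical, and the "proof" here is just a hash binding the on-chain summary to the raw off-chain batch.

```python
import hashlib
import json

class SensorAggregator:
    """Hypothetical sketch of the warehouse TEE role: buffer raw sensor
    readings off chain and emit only periodic digests for the blockchain."""

    def __init__(self, batch_size=3):
        self.batch_size = batch_size
        self.readings = []

    def record(self, humidity, temperature):
        self.readings.append({"h": humidity, "t": temperature})
        if len(self.readings) >= self.batch_size:
            return self.flush()
        return None  # nothing to commit yet

    def flush(self):
        # Summary statistics replace the raw stream; the hash serves as a
        # cryptographic commitment to the raw batch that stayed off chain.
        batch, self.readings = self.readings, []
        summary = {
            "count": len(batch),
            "avg_h": sum(r["h"] for r in batch) / len(batch),
            "avg_t": sum(r["t"] for r in batch) / len(batch),
        }
        proof = hashlib.sha256(json.dumps(batch, sort_keys=True).encode()).hexdigest()
        return {"summary": summary, "proof": proof}  # the periodic on-chain update

agg = SensorAggregator(batch_size=3)
agg.record(40.0, 20.0)
agg.record(42.0, 21.0)
update = agg.record(44.0, 22.0)  # third reading triggers a commit
print(update["summary"])  # {'count': 3, 'avg_h': 42.0, 'avg_t': 21.0}
```

The point of the sketch is the traffic shape: three raw readings become one small summary plus one hash on chain, which is how the TEE relieves the ledger of the raw data volume.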
Q: Okay, just to clarify the on-chain/off-chain thing: Avalon is really about performing confidential computations off chain, and for the moment that has no effect on the state of the chain. Is that correct, or what is the objective there?

A: We do assume that there are, for example, smart contracts or chaincode that we interact with to submit work, and I will talk a little bit later about that. So we do assume several smart contracts — or, in the case of Fabric, several chaincodes — running on chain that actually handle the registration and so on.

Q: Right, so it is really the registration and the other supporting parts, rather than actual state, that is being modified by Avalon?

A: Avalon is currently stateless, and we are working in the EEA Trusted Compute working group on adding this state; later on we are going to add it.

Q: So currently it is stateless, but there is a plan to incorporate some notion of state later on?

A: That is right. Any other questions?

Comment: I just want to add a little bit to that answer. This is a learning process; this technology is kind of difficult to start playing with, and of course the ultimate goal is to standardize the interaction between trusted data and trusted computing. But the learning point here, at the moment, is that we are using smart contracts as a registry for discovery and for commitment of the data that is to be processed. This is only a stepping stone.

Okay, thank you. Any other questions? Okay, well, let us look at the architecture. As I said, today we are going to look at it at a pretty high level. It is evolving — it is not complete, so we are in kind of an intermediate state, and we have a somewhat limited implementation at the moment that is evolving, as was just said, both in the definition and the implementation. Later on, we are going to have several deep-dive sessions.
Those sessions will look at the individual components and how we plan to address them. While talking about the architecture, I will cover a little of both the current and the planned architecture; it is pretty high level, so that should not be a problem, but I will point out what we have not implemented yet.

Fundamentally, there are three major components involved in Avalon. One of them is the requester application. The requester application can be a front-end UI, some form of script, or an enterprise application; in some cases, even though it is shown outside of the blockchain, parts of it could be custom smart contracts or chaincode that actually run on the blockchain. The important part is that we are trying to isolate application developers as much as possible from the intricate details of Avalon. The idea is that the application uses the Avalon-provided connector library to make the interaction with the Avalon-specific blockchain contracts and with the trusted compute service as transparent and easy to use as possible.

The requester app has two ways to communicate with the trusted compute service, which is the primary component of the architecture, and I will talk about that a bit later. One of them is through the blockchain. Technically the blockchain side includes four smart contracts, but I combined two of them into one here. There is a worker registry contract, and also a contract that lists potentially multiple worker registries. Even though it is not a requirement, the general assumption is that there is going to be one worker registry contract per trusted compute service. A worker registry contract can represent one trusted compute service, and even though it is possible to combine multiple trusted compute services under the same registry, it is probably easier from a maintenance perspective not to do that. The work order queue contract is used to submit tasks from the requester application to the trusted compute service, and the work order receipts contract maintains information proving that the tasks — work orders, in our terminology — were completed. As I mentioned, there are two ways to do this: one way is through the blockchain, but the requester application can also communicate with the trusted compute service directly; that is a more traditional JSON-RPC listener, and it is there to address the scalability usages, the ones I described in the IoT case. In the blockchain case, the diagram shows the trusted compute service retrieving data from the blockchain, but in reality it can rely on a notification or alert mechanism and be notified by the blockchain instead of polling for requests.

On the trusted compute service side we really have just a few components, even though they may be complex in themselves. Fundamentally there are the connectors — the blockchain connector and the RPC connector — and then there is the worker registry and work order queue manager. There are going to be a couple of databases: one maintains information about the available workers, and another maintains information about the work orders, the work order queue. A queue manager handles these queues. The current implementation of the queue manager is pretty basic: it just puts entries into the appropriate tables. The implementation uses LMDB, and it simply stores the data in these databases and processes it. Once we have state management, we will have to deal with more complex algorithms, and we will have to add the ability to deal with dependencies; and that, to the earlier question, is part of the plans.
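The current queue manager behavior described above is basic enough to sketch directly. This is an in-memory illustration only: the real implementation persists its tables in LMDB, and the class and method names here are hypothetical.

```python
import itertools
from collections import deque

class WorkOrderQueueManager:
    """Minimal sketch of the Avalon queue-manager idea: one table for
    registered workers, one FIFO queue of pending work orders."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.workers = {}       # worker_id -> worker details
        self.pending = deque()  # FIFO of pending work order ids
        self.work_orders = {}   # work_order_id -> request and status

    def register_worker(self, worker_id, details):
        self.workers[worker_id] = details

    def submit(self, request):
        # A requester enqueues a task for the trusted compute service.
        wo_id = next(self._ids)
        self.work_orders[wo_id] = {"request": request, "status": "pending"}
        self.pending.append(wo_id)
        return wo_id

    def dispatch(self):
        # A worker (or worker pool) pulls the oldest pending order.
        if not self.pending:
            return None
        wo_id = self.pending.popleft()
        self.work_orders[wo_id]["status"] = "processing"
        return wo_id

qm = WorkOrderQueueManager()
qm.register_worker("sgx-worker-1", {"type": "TEE-SGX"})
first = qm.submit({"workload": "echo", "data": "hello"})
second = qm.submit({"workload": "echo", "data": "world"})
print(qm.dispatch())  # 1 — the first-submitted order is dispatched first
```

Once state management and work order dependencies arrive, `dispatch` would have to consult a dependency graph rather than a plain FIFO, which is exactly the added complexity mentioned above.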
We are going to have to implement some of that on chain and some of it off chain to achieve this state management.

Another big part is the workers themselves. Currently, as I mentioned, we have implemented the Intel SGX trusted execution environment worker, but the architecture is actually open to different types of workers, and I listed just a few possible examples here. ARM TrustZone is one of the candidates to be added soon, and the other options are zero-knowledge proofs and MPC, which are already listed in the EEA specification that we generally follow; we are also extending to other blockchains. An important part is the orchestration that allows handling multiple workers in an efficient manner, following, I would say, the elastic compute model, so you can dynamically increase and decrease the number of workers. Another important part is dealing with multiple enclaves, or trusted execution environments, that represent the same worker to the requester — what we call worker pools. The current implementation, as I said, is pretty limited: we have a single worker, and that is the biggest part for us to extend. The single-worker implementation is pretty much a skeleton, but it is very important that we add this worker pool support. For the orchestration, we do not plan to build our own orchestration engine; instead, we are going to utilize existing orchestration engines, and the plan is to start by integrating with Kubernetes.

Another important part I want to focus on: the current worker handles the key management, and it also does the processing of the work orders. That is okay for trusted execution environments and enclaves with a minimally sized TCB, but our plans also include adding frameworks that allow running pretty large applications — examples would be library-OS environments such as Graphene, and there are a number of other commercial and publicly available development environments. Those increase the chances of potential user errors and application errors, so another important area is to maintain the key management separately from the processing of work orders. This has not been implemented yet either, but it is another important area for us to work on.

One more area: there are some situations where running the whole execution inside the trusted enclave is possible, for example to process some self-contained algorithm. But in many cases the workload requires access to external data, simply because the input or output data may be too large to be embedded in the work order request itself. For this purpose we have what we call the inside-out API, which allows access to external data sources. In the inside-out API we provide the general framework that makes the call out, but the actual specifics are provided by the application, and those specifics can include, for example, the file system, a MongoDB database, or access to an external web service; this is really going to be application specific. And finally, how are we going to trust this external data source accessed through this framework? Trust evaluation is also going to be application specific, because Avalon obviously does not know what data has to be processed. The workload — the orange part — is going to know what sources to trust; Avalon provides the framework that makes it easier to validate and access the data, but the decision to trust or not to trust is made by the application-specific workload.
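The split just described — a generic call-out mechanism in the framework, with the trust decision left to the workload — can be sketched as follows. This is not the actual inside-out API; the class `InsideOutFramework`, the pinned-hash policy, and all names here are illustrative assumptions.

```python
import hashlib

class InsideOutFramework:
    """Sketch of the inside-out split: the framework does the call-out
    plumbing, the application-specific workload supplies the policy."""

    def __init__(self, trust_policy):
        # trust_policy comes from the workload (the "orange" code).
        self.trust_policy = trust_policy

    def fetch(self, source, fetch_fn):
        # fetch_fn abstracts the transport: file system, MongoDB,
        # an external web service, and so on.
        data = fetch_fn(source)
        if not self.trust_policy(source, data):
            raise PermissionError(f"workload does not trust {source!r}")
        return data

# Application-specific policy: only trust a known source whose payload
# matches a hash the workload pinned ahead of time.
PINNED = {"warehouse-db": hashlib.sha256(b"42.5").hexdigest()}

def my_policy(source, data):
    expected = PINNED.get(source)
    return expected is not None and hashlib.sha256(data).hexdigest() == expected

framework = InsideOutFramework(my_policy)
value = framework.fetch("warehouse-db", lambda s: b"42.5")
print(value)  # b'42.5'
```

The design point is that the framework never hard-codes what "trusted data" means: the same `fetch` path serves any workload, while each workload ships its own policy function.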
Okay, a few words about the workloads. There are different types of workloads. There are workloads that are fully known at compile time, and those are going to be built into the enclaves. And there are those that are not known at compile time — for example scripts — where the enclave for the trusted execution environment contains an interpreter, for example a Python interpreter, and the actual workload, written in Python, can be provided dynamically at runtime. Obviously that will require some form of mechanism to trust those scripts as well: they can potentially be handled with whitelists or blacklists, that can be done by the requesters, or a trusted party can sign the full workload, so that we have a defined set of scripts that can actually be executed.

And finally, a little bit about the colors here. The blue color represents what is going to be the Avalon framework, and the orange color represents the application-specific code. The idea is to make sure that Avalon provides the framework so that application developers only have to provide the custom application-specific code on the client side — the requester application — and, in the trusted execution environment, the workload itself. Note that there is already a tutorial on how to build a workload application, which shows how easy it is to create a minimal workload for Avalon; it is available on GitHub already. I am not going to go through it today, but you are welcome to see for yourself that the boilerplate code for the orange parts is actually pretty small and easy to complete. Any questions on the general architecture slide? Then I plan to go through the workflow and the details of building the trust.

Q: How would you describe the smart contracts and blockchain in the top-left corner?

A: These are Ethereum smart contracts in this case — I just could not come up with a generic term. If it is developed on Ethereum, it would be a smart contract; in the case of Fabric, it would be chaincode; in the case of Sawtooth, it would be a transaction processor.

Q: Right, but my question was to understand, since we are using the blockchain to maintain the worker registry, the work order queue, and the receipts, how do different organizations play a role in this?

A: In general, we expect that Avalon will provide the four smart contracts I mentioned (I listed at least one of them here), and those smart contracts follow the EEA off-chain trusted compute specification. The APIs are defined in that specification; they are defined for Ethereum, but they can pretty easily be ported to, for example, Fabric chaincode, and that work is actually going on already. The API for all of these contracts is relatively simple: it is basically get, put, and lookup APIs for all of them. There is going to be some internal logic behind that, but it is going to be pretty straightforward. In terms of how applications play there, there are two ways an application may use this. First, it may decide to modify its own versions of the smart contracts; for example, for work orders it may want a different policy for how many work orders to maintain and how quickly to discard them. Second, applications may have extensions — additional contracts that coordinate with the work order queue. For example, in the case of attested oracles there is a popular API — I think initially it was called Town Crier, now it is Chainlink — that includes a number of APIs; that type of smart contract can actually be combined with the work order queue and maybe even with the worker registry.
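The "get, put, and lookup" shape of the registry contracts mentioned above can be sketched like this. The method names loosely echo the spirit of the EEA specification, but this is an illustrative in-memory model, not the actual contract interface.

```python
class WorkerRegistry:
    """Illustrative sketch of the worker registry contract's API shape:
    put (register), get (retrieve), and lookup (discover by capability)."""

    def __init__(self):
        self._workers = {}

    def worker_register(self, worker_id, worker_type, org_id, details):
        # "put": record a worker and its attestation details on chain
        self._workers[worker_id] = {
            "type": worker_type, "org": org_id, "details": details,
        }

    def worker_retrieve(self, worker_id):
        # "get": fetch one worker's record
        return self._workers.get(worker_id)

    def worker_lookup(self, worker_type=None, org_id=None):
        # "lookup": filter by capability so a requester can discover a
        # worker that does the right type of tasks
        return [
            wid for wid, w in self._workers.items()
            if (worker_type is None or w["type"] == worker_type)
            and (org_id is None or w["org"] == org_id)
        ]

registry = WorkerRegistry()
registry.worker_register("w1", "TEE-SGX", "org-a", {"proof": "ias-report"})
registry.worker_register("w2", "ZK", "org-b", {"proof": "zk-params"})
print(registry.worker_lookup(worker_type="TEE-SGX"))  # ['w1']
```

An application extension of the kind described above would layer its own policy or additional contracts on top of these three primitive operations rather than replacing them.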
That is how application developers can use, extend, and modify these smart contracts. Another part is the work order receipts. This is a very flexible — I would say placeholder — definition in the specification right now, and that is intentional. Receipts may be relatively simple, where the processor of the workload just submits a cryptographic proof: yes, this worker processed a work order with this hash value of the request and this hash value of the result, signed by the worker. Or they can be complicated, including some kind of attestation, maybe used among multiple participants.

Q: Okay, thank you.

Any other questions? Okay, so before the detailed flow, let us go over the overall Avalon processing. There are several phases of the execution, and we can go through them one by one. The first is the enclave registration. Here I am going to talk in the context of Intel SGX, which is what is currently implemented; for other trusted compute options the flow would change, potentially substantially during the enclave registration and slightly during the enclave discovery, but the other parts should stay generally similar. During this phase the TCF — sorry, the project used to be called TCF, so I still sometimes say TCF instead of Avalon — the Avalon middleware first creates a worker. The worker generates its keys, generates its own quote, and returns it to the Avalon middleware, and then Avalon submits it through the attestation framework to produce the verification report. The current implementation uses the Intel Attestation Service, and in the future we plan to add support for the DCAP framework, which is likely to be more popular and more common in the near future. After that, once we have the verification report and the worker's public keys, they are submitted to the blockchain, which serves as an intermediary here, and the information about this worker is recorded on the blockchain. The Avalon middleware may create multiple enclaves; in that case they are going to represent a pool of workers, but the important part is that the same record on the blockchain represents the whole pool, and Avalon has the ability to route work orders to any of the workers in the pool, following some policy over the workers available at that time. That is a very important feature for scalability, in the more traditional way cloud service providers tend to operate.

The next step is the enclave discovery. In this case, the requester application makes a call to the blockchain and looks for appropriate workers; the idea is to find a worker that does the right type of tasks. Once the worker is found, the registry API — in reality a library provided by Avalon — verifies and stores the enclave attestation verification report and its keys. I will have a slide next on what the verification really entails, but at a high level, the verification report provided by the IAS service is verified first, and then the library checks that the keys actually match and form the chain of trust. By the way, are there any questions on these enclave registration and discovery stages?

Okay, then on to the work order invocation. In this case, the requester application prepares a work order request — we will have an additional slide on what that really involves in terms of security — and the work order is submitted to the blockchain. As I mentioned before, the other option is to submit it directly through the JSON-RPC API to the trusted compute service.
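The registration and verification chain just described can be sketched as follows. This is structural illustration only: real Avalon uses asymmetric SECP256K1/RSA keys and an IAS- or DCAP-signed report, whereas here an HMAC stands in for the signature (so the sketch's verifier shares the signing key), and all function names are hypothetical.

```python
import hashlib
import hmac
import os

def attest_worker():
    """Sketch of the enclave side: generate keys, bind the signing key
    into the attestation report via the report-data field."""
    signing_key = os.urandom(32)     # enclave-generated signing key
    encryption_key = os.urandom(32)  # enclave-generated encryption key
    # report_data binds the signing key into the attestation report
    report_data = hashlib.sha256(signing_key).hexdigest()
    # the signing key endorses the encryption key (HMAC stands in
    # for a real asymmetric signature here)
    enc_key_sig = hmac.new(signing_key, encryption_key, hashlib.sha256).hexdigest()
    report = {"report_data": report_data}  # would be IAS/DCAP-verified
    return signing_key, encryption_key, enc_key_sig, report

def requester_verifies(signing_key, encryption_key, enc_key_sig, report):
    """Sketch of discovery-time verification of the chain of trust."""
    # Step 1: the attestation report must cover the signing key
    if hashlib.sha256(signing_key).hexdigest() != report["report_data"]:
        return False
    # Step 2: the signing key must endorse the encryption key
    expected = hmac.new(signing_key, encryption_key, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, enc_key_sig)

sk, ek, sig, report = attest_worker()
print(requester_verifies(sk, ek, sig, report))           # True
print(requester_verifies(sk, os.urandom(32), sig, report))  # False: swapped key
```

The two checks mirror the two links of the chain: hardware attestation vouches for the signing key, and the signing key vouches for the encryption key, so a requester encrypting to that key knows it terminates inside the attested enclave.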
In this case, the diagram shows the path through the blockchain: the requester submits the work order request to the smart contract, and the middleware gets the work order request. The diagram shows polling for work orders, but the notification mechanism can be used instead or in addition, and then the middleware dispatches the request to the pool. During this dispatch, one of the workers picks up the work order and starts to process it, which involves three sub-steps. The first is processing the work order request, which includes decrypting the data and verifying the integrity of the request. The second is the actual execution of the work order by the application-specific workload, which may optionally need to access external data. When the result is returned to the worker core — the Avalon code — Avalon prepares the response, which includes encrypting and signing the data. Finally, the response is submitted to the blockchain, where it is eventually retrieved by the requester and processed; that processing also includes validating the integrity and decrypting the data. For these security-specific steps I am going to have an additional slide. Any questions on the overall processing flow?

Okay, then a few words about the chain of trust. In this case we have three important elements that ensure the trust between the requester and the worker. First of all, it starts from the attestation verification report. As I mentioned, we currently use the IAS service to authorize the report, and we are going to work on the DCAP implementation, which will be added to Avalon in the future. In either case, there is an attestation verification report, and the important part is that it includes a field called report data. In addition to the verification report, the enclave creates, inside trusted code, two key pairs: one for signing and verification of signatures, and another one for encryption. The signing key is a SECP256K1 key now, but both the implementation and the specification allow changing the type of key; the encryption key is an RSA key now, and technically by the spec it can also be changed, but the implementation allows only this one key type at this time. The report data includes the hash value of the verification public key, and the verification public key is used to sign one or more encryption public keys. That is how the chain of trust is established: it makes sure that when the requester submits a request to the worker, it can be sure the request can be seen in the clear only by that specific worker, and it can also verify that the processing was done by that specific worker. Any questions on the chain of trust?

Okay, so a few words about work order confidentiality and integrity. There is a number of steps performed by Avalon to preserve the confidentiality and integrity of the work order request and the work order response. The requester starts by generating a one-time symmetric key that is used to encrypt the data. Then the hash value of the request is calculated, and that hash value is also encrypted with this symmetric encryption key. The symmetric encryption key itself is encrypted with the worker's encryption public key. Optionally, the request can also be signed by the requester. As you can see, signing is optional — even though in many cases it is probably going to be used — because we also have another integrity mechanism, the encrypted hash value, to handle anonymous requests where the requester is not known.
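The requester-side steps above, together with the worker-side checks, can be sketched end to end. This is a structural illustration, not real cryptography: a SHA-256 counter keystream stands in for the actual symmetric cipher, and the worker's RSA key pair is modeled as a single shared secret; every name here is hypothetical.

```python
import hashlib
import hmac
import os

def keystream_xor(key, data):
    """Toy XOR cipher standing in for the real symmetric encryption."""
    out = bytearray()
    for block in range(0, len(data), 32):
        pad = hashlib.sha256(key + block.to_bytes(4, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[block:block + 32], pad))
    return bytes(out)

def requester_build(worker_pub_key, payload):
    session_key = os.urandom(32)               # one-time symmetric key
    enc_data = keystream_xor(session_key, payload)
    digest = hashlib.sha256(payload).digest()  # request hash...
    enc_digest = keystream_xor(session_key, digest)   # ...also encrypted
    # session key wrapped with the worker's encryption key
    enc_session_key = keystream_xor(worker_pub_key, session_key)
    return {"data": enc_data, "digest": enc_digest, "key": enc_session_key}

def worker_process(worker_priv_key, request):
    session_key = keystream_xor(worker_priv_key, request["key"])
    payload = keystream_xor(session_key, request["data"])
    digest = keystream_xor(session_key, request["digest"])
    # recompute the hash and compare with the one sent in the request
    if not hmac.compare_digest(hashlib.sha256(payload).digest(), digest):
        raise ValueError("integrity check failed")
    return payload  # the workload would now run on the plaintext

worker_key = os.urandom(32)  # stands in for the worker's key pair
request = requester_build(worker_key, b"calculate-risk:pool-17")
print(worker_process(worker_key, request))  # b'calculate-risk:pool-17'
```

Note how the encrypted hash gives integrity even for an unsigned, anonymous request: a tampered ciphertext makes the recomputed hash diverge from the one wrapped under the session key.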
When the request arrives at the worker, the worker first decrypts the symmetric encryption key, then it decrypts the work order data, calculates the hash of the request, decrypts the hash value provided in the request, and compares the two values. If a signature is provided, it also verifies the signature using the requester's key, and then it processes the work order. It encrypts the work order response data using the same key that was provided in the request, calculates the hash value of the response, and signs it with its signing key. These are the steps actually performed during the work order submission and work order result processing. Any questions on this flow? By the way, it is also described in the specification — not necessarily in exactly the same format, but in general it is there as well.

Q: Hey, Jim here. What is the role of the hash? Since only the target worker is able to decrypt the payload, it should already have pretty high confidence that the data has not been manipulated in transit. So what is the hash adding at the boundary?

A: There are two reasons for this approach. One of them is that not all data may be encrypted: in some cases the data may be sent in the clear — some work orders may require only integrity checks, not data encryption, and they may be submitted in the clear. The second one is a complication that I did not mention here: the data may be submitted by multiple parties, and they may use their own keys. In general, even in the case of a single requester, there can be multiple parts of the data, and because there are multiple parts, it is technically possible to mount a man-in-the-middle attack where you remove one of the data items and submit your own data instead. In our case each data item can effectively come with its own key, and if you submit your own part of the data with your own key, you technically can craft an attack that would allow you to get some insight into the results or into how the worker works. That is why we need integrity protection in addition to the data protection.

Q: Yes, but the hash and the payload that it is hashing over come in a single request, right? So presumably a man in the middle can recalculate it. Unless the hash and the payload can come through different channels, in different requests, I do not see how the hash can be used for integrity checks. It is not clear what function the hash is serving.

A: I am not sure that I fully understand the question, but technically the request may be viewed as logically coming from multiple channels.

Q: Okay, that is useful, yes.

Any other questions? Okay, so, future development. I listed several areas that we have to work on to make sure that Avalon is truly useful and has enough capabilities to be used in practical, real-world situations. One of them, very important, is scalability and some core functionality, and I listed the three most important areas that come to my mind. The first is supporting the elastic worker pools and doing that in a way that is transparent to the app developers. Transparent to the app developer means that from the requester's perspective there is a single worker, but on the service-provider side there are multiple workers, so the request can be forwarded to any of those enclaves for processing without overloading particular nodes. We definitely want to avoid creating our own orchestration engine; we want to utilize one of the common orchestration engines, and Kubernetes is the number-one choice now. Finally, there are the SGX-specific enhancements.
Finally, there is SGX-specific work: an upcoming SGX feature will allow dynamic memory allocation, and that is a feature we want to take advantage of. Again, it is linked to scalability, because you can execute more workloads on the same physical node if you can allocate memory dynamically.

The next important part is obviously security, because that is the core of what Avalon focuses on, and there are three areas here. One is isolating work order execution from development; I mentioned that during the previous slide. Another is multi-tenant support: we want to be able to create workers specific to a particular requester, so that only work orders from that same requester can be executed on that worker instance. That is an important requirement, and early usages of trusted compute proved it to be a valuable feature requested by customers. The last one is access to external data, not just the data passed into the trusted worker. We are already partially working on that, but we still need to do more.

Then there are the front-end APIs and usages. We are currently working on integrations with blockchains. On the Ethereum side, iExec is actually doing most of the work, and the Fabric integration is being done by IBM. We also need to work on privacy-preserving state management; there were already questions about that, and we definitely plan to do it. The specification for that approach is being developed as well, and once it is finalized we can start working on the implementation. And obviously we want to work on different vertical use cases. We have demo examples, and today we will show how Avalon can be used in areas like the financial industry, supply chain, and trusted tokens; those are good areas for us to look at.

There is also integration with different types of trusted workers. We already have a minimal-size trusted compute base, where the enclave is kept to a minimal size.
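The elastic worker pool and multi-tenant ideas described above can be sketched as a toy dispatcher: the requester addresses one logical worker ID, while the CSP fans requests out across interchangeable enclave instances, optionally dedicating a pool to a single tenant. Class and method names here are invented for illustration and are not Avalon APIs:

```python
import itertools

class WorkerPool:
    """Hypothetical dispatcher illustrating a transparent worker pool."""

    def __init__(self, logical_id, instances, tenant=None):
        self.logical_id = logical_id            # what the requester sees
        self.tenant = tenant                    # if set, pool is dedicated to one requester
        self._rr = itertools.cycle(instances)   # round-robin over real enclave instances

    def submit(self, requester_id, work_order):
        # Multi-tenant check: only the owning requester may use a dedicated pool.
        if self.tenant is not None and requester_id != self.tenant:
            raise PermissionError("worker pool is dedicated to another tenant")
        # Any instance may serve the request; the requester never sees which.
        instance = next(self._rr)
        return instance, work_order

pool = WorkerPool("worker-1", ["enclave-a", "enclave-b", "enclave-c"],
                  tenant="acme")
print(pool.submit("acme", {"workload": "echo"})[0])   # enclave-a
print(pool.submit("acme", {"workload": "echo"})[0])   # enclave-b
```

A real deployment would delegate the scheduling and scaling to Kubernetes, as the talk suggests; the round-robin here just stands in for whatever load balancing the orchestrator provides.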
We will need to improve and enhance that trusted compute base and add additional capabilities to it. An important part is adding what I call a LibOS-based development framework; in reality, what we are talking about is adopting frameworks that allow more traditional development approaches and languages. The current minimal-size TCB requires you to build your custom workload as a library; this second option would allow the use of more generic, more common software development. Examples we are already looking at include trusted execution environment frameworks such as MesaTEE (forgive me if I am not using the proper term) and Graphene, and there are a number of others that have recently come onto the market or already exist there. We also need to actively look for partners who would start contributing code to the community and who would work on other hardware trusted execution environments, and on completely different trusted compute approaches such as MPC. For Avalon to be really useful, we constantly have to look at how we can improve application developer support. We are at an early stage, so we do not have much documentation or many tutorials yet, but this is an important area for us to work on. We obviously need to explain the use-case portfolio, so people can find a starting point for application development. We currently have a single repository that includes pretty much everything, and an important step at some point will be to split core Avalon from the SDK repositories. That would make development much easier: people who want to contribute to the core would continue on the core, and those who care about application development would be isolated from changes to the core. Any questions?
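The "workload as a library" model mentioned above can be illustrated with a toy registry: the trusted worker dispatches on a workload ID to code linked in ahead of time. The decorator and dispatch function below are hypothetical (in the actual minimal-size TCB, workloads are native code built into the enclave), but the shape is similar:

```python
# Hypothetical workload registry; names are invented for illustration.
WORKLOADS = {}

def workload(workload_id):
    """Register a function as a named workload, library-style."""
    def register(fn):
        WORKLOADS[workload_id] = fn
        return fn
    return register

@workload("echo")
def echo(in_data: str) -> str:
    return in_data

@workload("word-count")
def word_count(in_data: str) -> str:
    return str(len(in_data.split()))

def execute(workload_id: str, in_data: str) -> str:
    # In a real enclave this dispatch happens in trusted code,
    # on decrypted work-order input.
    return WORKLOADS[workload_id](in_data)

print(execute("word-count", "hello trusted world"))  # → 3
```

The LibOS-based alternative discussed in the talk removes the need to structure the workload this way at all: a framework such as Graphene runs a mostly unmodified application inside the enclave instead of a purpose-built library.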
Introduction to Hyperledger Avalon. Presented by Eugene Yarmosh to Hyperledger Avalon Developers Forum, December 3, 2019.

Agenda:
- Introduction
- Use Cases
- Architecture
- Blockchain and Direct Connections
- Processing Flow
- Confidentiality and Integrity
- Future Development