I have talked with many teams from several companies this year about what Stellar can do. They all have a similar requirement: attaching some text/data to a transaction. They would like to build a POC to check what a blockchain or distributed ledger can do. Most of them choose Ethereum even though they do not need an EVM. So it is time for Stellar to learn from other chains.

Factom can attach content up to 10k in size.

BitShares does not limit the size of the memo, and the memo has a separate key to keep it private.

Corda can attach a zip file. It could be a document, a picture, or a script that can run in the VM.

Of course Stellar can attach a hash, but that sounds like "640K of RAM is enough for anybody". It is just a workaround. A larger memo will increase the size of the ledger, but the benefit would outweigh the cost.

SDF can still limit the size of memo/data, just increase it a bit (1k or more). That would meet many more requirements, and many more companies would adopt Stellar.

    liangran

    Attaching hashes is not a workaround. It's an elegant solution that keeps transaction throughput high while making it possible to connect arbitrarily big data blobs to a transaction. Just take a look at the compliance protocol: it allows participants to attach as much data as they need about the involved parties to transactions.

    I would say that taking the easier way and allowing anyone to attach 10k of content to transactions is the workaround. I don't know the specific requirements of these companies you are talking about, but attachments should cover 99% of all use cases.

    That is why I say Stellar should learn from other chains. It is not elegant; you are asking your customers to perform another action to retrieve the data from another channel. It is like reading an index in RAM and then reading from the hard disk.

    It is a common requirement that people need to attach some data to the tx. The requirement comes from users who want to use Stellar; the companies include banks, supply chain companies, insurance companies, etc. Read the Corda documentation and you will find the same requirement.

    I do not expect a larger size to cover 99% of cases, but a larger memo can cover more. If the data is really big, it could use the "elegant" way.

      liangran

      It is like reading an index in RAM and then reading from the hard disk.

      This is how every database works today. Nobody complains about this because it's abstracted behind software. Developing your system with this small additional requirement doesn't change much. If you are able to retrieve the data from the Stellar database you can do it from any other database too.

      If this is a serious deal-breaking requirement, maybe Stellar is not a good fit for your use case.

      Corda is a distributed ledger platform designed to record, manage and automate legal agreements between business partners.

      Stellar is a platform that connects banks, payments systems, and people. Integrate to move money quickly, reliably, and at almost no cost.

      No. Check an in-memory database like SAP HANA and you will find it is a revolution compared to the classic database. If we can get the data in one action, why do we need two? Just because it is "elegant"?

      Corda's requirement was proposed by several financial institutions, so it could be a good reference. For example, if we could put some short scripts in the tx, it could round out the smart contract feature.

      The most important thing is to meet the requirements of users. That is why I say Stellar should learn from others. Stellar could and should be adapted to other cases: not only the public chain, but also private chains inside different companies.

        Yes, the problem is that later someone will want to upload 5MB PDF contracts, so your workaround isn't enough.
        The hash is the way to go: you can upload the file to any service, or your own service, and save the hash in the transaction.
        You can later use the hash to look the file up at the service.
        It would be easy to implement a service where you can pay to store data using lumens; in the end, it is similar to the federation concept.
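
        A minimal sketch of the storing side with the Python stellar_sdk (assuming a recent SDK version, a funded testnet account whose seed is in `secret`, and a placeholder destination address):

        ```
        # Hash the file, keep the file itself off-chain, put only the 32-byte hash on-chain.
        import hashlib
        from stellar_sdk import Asset, Keypair, Network, Server, TransactionBuilder

        server = Server("https://horizon-testnet.stellar.org")
        keypair = Keypair.from_secret(secret)            # placeholder: the sender's seed
        account = server.load_account(keypair.public_key)

        contract = open("contract.pdf", "rb").read()     # the 5MB PDF stays off-chain
        digest = hashlib.sha256(contract).digest()       # 32 bytes: fits a MemoHash

        tx = (
            TransactionBuilder(account, Network.TESTNET_NETWORK_PASSPHRASE, base_fee=100)
            .append_payment_op("GDESTINATION...", Asset.native(), "1")  # placeholder address
            .add_hash_memo(digest)                       # only the hash enters the ledger
            .set_timeout(30)
            .build()
        )
        tx.sign(keypair)
        server.submit_transaction(tx)
        ```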

        I totally understand what you said. I am the operator of the RippleFox anchor. I use the federation protocol to process withdrawal data: the tx on Stellar carries an index, and the data is stored in a separate database.

        I agree a 5M file should not be put on the ledger. What I mean is that 64B is not enough. 64B~5M is a range, and I am sure we can find a better value than 64. If users want to put in more data than that size, they can use the real workaround.

        liangran, Factom doesn't really store all this data in the blockchain. It is just storing a hash of the data as well. But what it does have is an API that wraps this, so to someone building on Factom it is seen as one operation.

        I think the way to address this is to do something similar. Only the hash would be stored in Stellar, but there would be some nice API that pulls the larger object from some standard place.
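
        For example, a Factom-style wrapper could be as small as this sketch (the names `attach`, `resolve`, and `BLOB_STORE` are invented here; the on-chain part is the hash-memo submission shown earlier):

        ```
        # The caller sees one operation; underneath, the blob goes to an off-chain
        # store and only its sha-256 hash is destined for the transaction memo.
        import hashlib

        BLOB_STORE = {}  # stand-in for "some standard place" (S3, IPFS, a shop DB, ...)

        def attach(data: bytes) -> bytes:
            """Store the blob off-chain; return the 32-byte hash to use as a MemoHash."""
            digest = hashlib.sha256(data).digest()
            BLOB_STORE[digest] = data
            return digest

        def resolve(digest: bytes) -> bytes:
            """Pull the larger object back, given the hash seen in a transaction."""
            return BLOB_STORE[digest]
        ```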

        Would be great to have some concrete proposals started here: https://github.com/stellar/stellar-protocol

        I have confirmed with a Factom employee: Factom can write up to 10k. In most cases people only write a hash.

        In the supply chain case, the tx sometimes needs to carry an invoice id and the company name. Another case is an exchange that would like to attach a short script to the tx for use in a smart contract. In these cases the data is not large, and using another system like IPFS would be too expensive.

        I am also a blockchain trainer in China, so I hope Stellar can meet more of the requirements in these cases. In China, more and more companies would like to see such a feature.

        Yup, a proposal would be great.

        As a Stellar user, I agree with what liangran said. We usually need to attach some text/data to the transaction.
        For now, memo-text allows no more than 28 bytes; I think many applications will be locked out of Stellar.

        For example, I am a vendor and I want to show my products on Facebook, Weibo, and WeChat, marked up with the product code and price.
        If memo-text could contain more words, a buyer could pay via Stellar and tell me the product code, quantity, recipient, and address in the memo-text.

        Thus I could do business without an online shop and without a shopping cart. It is so easy.

        By the way, not everyone wants to show the memo-text to the world. In the example above, the personal information is private; it should not be public.
        I think an encrypted message is necessary.

        If possible, at least 1024 bytes plus optional encryption would be better.
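
        For what it's worth, the encryption half can already be sketched with a sealed box (PyNaCl here; key names are illustrative). The catch is the ~48 bytes of ciphertext overhead, so it only fits a hypothetical ~1k memo, not today's 28-byte memo-text:

        ```
        # Only the vendor can decrypt the order details.
        from nacl.public import PrivateKey, SealedBox

        vendor_key = PrivateKey.generate()     # vendor publishes vendor_key.public_key
        order = b"product=A42|quantity=2|recipient=..."

        ciphertext = SealedBox(vendor_key.public_key).encrypt(order)  # +48 bytes overhead
        assert len(ciphertext) <= 1024         # would fit a 1k memo, not the current 28B

        assert SealedBox(vendor_key).decrypt(ciphertext) == order     # vendor side
        ```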

          DavidLee The memo field is too low level for this use case. You would still need to send your users a random, non-human-readable string and teach them how to use the right memo. We are doing all this cryptic Bitcoin-like stuff again. Why? Customers should get a much friendlier user experience on Stellar. We have the tech.

          And as you mentioned, you can't have this sensitive customer information about what they are buying in the memo. So you again need another encryption layer on top of it, which makes it more complicated.

          A simpler solution would be to use the existing Stellar federation protocol. Your users would send a payment to e.g. product=viagra|quantity=1kg|recipient=FirstName LastName@Address...*yourshop.com. It's only one field, and most of the popular Stellar wallets support it. You get additional flexibility, as you can reject a payment in your federation server (e.g. you don't want to send your product outside of specific countries). On the federation backend you store the customer's data (nobody else has access to it), hash it, and make the hash part of the actual transaction.
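
          A rough sketch of such a federation endpoint (Flask, with invented route and storage names; the response fields follow the federation protocol):

          ```
          import base64, hashlib
          from flask import Flask, request, jsonify

          app = Flask(__name__)
          SHOP_ACCOUNT = "G..."   # placeholder: the shop's Stellar account id
          ORDERS = {}             # hash -> raw order data; stand-in for a real DB

          @app.route("/federation")
          def federation():
              if request.args.get("type") != "name":
                  return jsonify(error="unsupported type"), 400
              # the "product=...|quantity=...|recipient=..." part before *yourshop.com
              order = request.args["q"].split("*")[0].encode()
              digest = hashlib.sha256(order).digest()
              ORDERS[digest.hex()] = order   # the customer's data stays on the backend
              return jsonify(
                  account_id=SHOP_ACCOUNT,
                  memo_type="hash",
                  memo=base64.b64encode(digest).decode(),  # hash memos are base64-encoded
              )
          ```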

          In this example the whole complexity was hidden away from customers. They don't even need to know that the Stellar Protocol was used underneath.

          If you spend some time on the Slack and here, you will see that there are many people who mess up deposits to exchanges because they don't know how to correctly deal with the memo. A deposit to an exchange could be as simple as sending Lumens to username*exchange.com, but we need to start embracing the federation protocol.

          DavidLee's requirement is also a common case. Could the federation protocol resolve it? Yes, but it increases the cost for him. He may operate a small online shop, and it is not easy for him to set up such a service. And his customers may not use a client which supports the federation protocol.

          A larger memo could help ordinary people who lack IT resources to handle their own cases.

            liangran How is a larger memo going to help them? He said the only way this can work is a larger memo + encryption. Do you have any proposal for how to encrypt the data inside the memo right now?

            And what is the cost of customer support going to be? So many things can go wrong. How will refunds work? What happens if he receives transactions without a memo and has no way of contacting customers? I really can't take this hypothetical use case seriously.

            The core issue as I see it is this: To what extent should Stellar accommodate on-chain data storage? Right now we're very conservative, but I can certainly see a situation where we add larger memo types. I myself would love to see native multihash support.
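
            For reference, multihash is a small self-describing format (function code, digest length, digest), so a multihash-sized memo would pin down the hash algorithm as well as the digest. A sketch of the sha2-256 case:

            ```
            # Multihash = varint(code) + varint(length) + digest; 0x12 = sha2-256.
            import hashlib

            def encode_varint(n: int) -> bytes:
                out = bytearray()
                while True:
                    byte = n & 0x7F
                    n >>= 7
                    out.append(byte | (0x80 if n else 0))
                    if not n:
                        return bytes(out)

            def multihash_sha256(data: bytes) -> bytes:
                digest = hashlib.sha256(data).digest()
                return encode_varint(0x12) + encode_varint(len(digest)) + digest

            mh = multihash_sha256(b"hello")
            assert mh[:2] == b"\x12\x20" and len(mh) == 34  # 2 prefix bytes + 32-byte digest
            ```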

            On-chain data, whether it be in memos to annotate transactions or in data values associated with an account, has a cost associated with it: larger memos in the transaction will slow transaction application and will expand the size of the txhistory table and the size of the history archives. Likewise, larger account data values put pressure on the accountdata table. Here’s the key takeaway IMO: scalability questions such as these will always be about balancing tensions, and I don’t think simply looking to what other systems are doing will help us to answer the question… they are different systems with different goals and constraints. Simply saying “Factom does 10k” isn’t valid to me.

            Perhaps we can push the discussion along by discussing the three potential stances our design can take with respect to on-chain user customizable data fields:

            1. Minimize size to increase throughput while still enabling links to off-chain data
            2. Maximize size to increase the utility of on-chain data
            3. Balance size and throughput, enabling both scalability and on-chain utility (beyond simply storing off-chain links)

            Our present stance is number 1, and I personally believe it is correct. I think that data values and fields should only expand to better support links to off-chain data, such as larger hashes. Larger transactions will beget larger transaction fees and since we’re aiming to increase financial access I think we should make sure to keep these as low as possible.

            Maximizing size for utility (Stance 2) is a slippery slope, as mentioned. First it’s 10k for some JSON, then it’s a couple megs for a PDF, then people want to deliver video files via the blockchain. IMO it will lead to constant discussions about expanding the size as people come up with new and clever ways to abuse the ledger as a general DB. As hackers, we’re always trying to stretch limits and avoid writing any code that we don’t have to. Stance 2 is simply too easy to abuse, IMO.

            Stance 3 is the toughest to navigate: What is the right size to balance the concerns involved? What metrics should we use to decide what is too large or what is too small? How many operations per second are enough? Do we expand our sizes in concert with increases in hardware and network performance?


            Personally, I think we should enable easy off-chain data retrieval via horizon or some other ecosystem service outside of stellar-core. This will allow us to keep Stance 1, optimizing for scalability and throughput on-chain while allowing ease of integration with off-chain data. It’ll also let us avoid revisiting this discussion every N months in perpetuity; choosing to support larger hash values is a much simpler discussion than deciding how much JSON is too much JSON.
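
            The retrieval side of that already works against horizon today; in this sketch only the blob service URL and the `tx_hash` variable are assumptions:

            ```
            # Look up a transaction's hash memo via Horizon, then fetch the blob
            # from an (assumed) ecosystem service keyed by the hex digest.
            import base64, requests
            from stellar_sdk import Server

            server = Server("https://horizon.stellar.org")
            record = server.transactions().transaction(tx_hash).call()  # tx_hash: hex tx id

            assert record["memo_type"] == "hash"
            digest = base64.b64decode(record["memo"])  # Horizon base64-encodes hash memos
            blob = requests.get(f"https://blobs.example.com/{digest.hex()}").content
            ```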

            All this is just IMO, of course.

            liangran More than anything else in this thread, I liked the reference to SAP HANA.
            Amazon had to invent new types of instances with terabytes of memory to let people run SAP HANA installations:
            https://aws.amazon.com/sap/solutions/saphana/

            I'm sure we don't want to run Stellar nodes on instances which cost $6000/month.
            I agree with the other guys: memo or data fields may be extended for larger hashes, but not for custom data. Data should be stored off-chain.

            I don't have high-level IT skills. For me, if a larger memo + encryption were available, Stellar could apply to my business right now.
            I don't know how to deploy a federation server. Even if I can find someone who can do this, the cost will be high, and the most important question is whether he is trustworthy enough.
            Maybe I can wait; several years from now there will be many companies supplying all kinds of easy-to-use tools.

              With regard to the reference to Corda, their documents say the following:

              5.4 Attachments and contract bytecodes
              Transactions may have a number of attachments, identified by the hash of the
              file. Attachments are stored and transmitted separately to transaction data and
              are fetched by the standard resolution flow only when the attachment has not
              previously been seen before.
              https://docs.corda.net/_static/corda-technical-whitepaper.pdf

              On https://docs.corda.net/releases/release-M7.0/tutorial-attachments.html, Corda talks about how attachments are used, with the expectation that the node originating the attachment be able to serve it to any other node, and that the requesting node maintain a cache of the attachments. The attachment hash is the only content stored in the transaction, and it is up to the smart contract to manage its own access to the attachment.

              I don't see how any of that is different from what has already been described. I've actually been playing with core, horizon, bridge, compliance, and federation on a private setup here at home. Most of the work I've been doing has been trying to determine the best way to store information about images and digital documents (content, metadata, audit trails) on a distributed ledger. While I originally thought the ledger was going to be some kind of magical place that would just take whatever I was throwing at it, storing the content in a real content management system, with extra metadata on the document to capture the transaction and hash information, is more than enough for my needs.

              I've been doing content management for government, insurance, shipping, and manufacturing clients for about 20 years. I am managing systems that are subject to audit by agencies of multiple governments. Honestly, in most of the systems I've managed there are multiple queries to get a document, its metadata, and its audit trail history. And all of these systems have a separate subsystem for routing documents through whatever business process or use case the document supports. And there are a lot of "standard use cases" where it almost doesn't matter what industry you're supporting.

              I don't really see a valid use case behind having a user's asset-storage wallet do the work of constructing an order for someone's store. It sounds like the expectation here is that the work of constructing a valid order be pushed into the wallet, and the datastore for the whole thing be moved from the organization running the store, anchor, or gateway to the ledger.

              Am I mistaken in reading the original intent as the destination system only having to capture and store the transaction information and then go retrieve the information from the ledger? Or are you talking about a custom app sitting on a stream from horizon? How many inbound transactions will either of those cases be able to support without some kind of queuing, when it has to decrypt or parse the content as well as get to the back-end system fulfilling the order or acting on the payload?

              There is another reason for larger data. As we all know, one difference between Ethereum and Stellar is the VM, and we can consider the VM a state machine. Stellar has defined many operation codes. If we had larger data, users could define their own op codes and put them in the tx. This could help them create stronger contracts. This requirement was proposed by a senior developer from an exchange.

              10k may be too large, but the current size is too small. Even Twitter allows 140 characters. If we could have 256 characters, that would be much better. On the other hand, most txs will not have a memo, so it would not grow the ledger too much. We would gain more benefit than the cost.

                DavidLee If you explain exactly what you're looking for, we can advise better. Maybe those tools already exist, or we can make something for you.

                liangran I've been doing some thinking about this too, but from a different perspective, building on what we already have with bridge, compliance, and federation.
                Why can't we come up with a scheme where code can be given an account, and federation, toml, or account data are used to find where the code is located, the inputs it needs to run, and what it might cost to run it? It's obviously not a DAO, at least not the same way some other platforms do it, but it is almost doable today. A sketch of the account-data half is below.

                The attachment convention could be used to pass messages if something more is needed (I remember some early scheme using SMTP to pass SOAP messages, so anything is possible). Or the data or toml could point to an endpoint for a standard communication protocol. And all of this is off-ledger; it wouldn't have to be on the ledger until value was exchanged.
                I know I'm not describing an always-on, unstoppable worldwide computer trustlessly running code on random anonymous nodes somewhere in the ether, but I am describing something that could be built on Stellar today.
                And there could actually be a business here, possibly with a custom asset being used to pay for the scheme.
                I'm not saying this scheme can do what Ethereum or Hyperledger can do, but it could create a market for compute resources that could achieve an acceptable result.
                Aren't we already asking anchors and gateways to do some of this today? And I don't see the need to do anything more than pass hashes or some other kind of identifier on the ledger.
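
                Here is a sketch of the account-data half of that scheme (the keys, URL, and `secret` are invented for illustration; Stellar data entries cap each key and value at 64 bytes):

                ```
                # The "code account" advertises where its code lives, its hash, and a price.
                import hashlib
                from stellar_sdk import Keypair, Network, Server, TransactionBuilder

                server = Server("https://horizon-testnet.stellar.org")
                code_account = Keypair.from_secret(secret)  # placeholder: the account's seed
                source = server.load_account(code_account.public_key)

                code_zip = open("code.zip", "rb").read()    # the code itself stays off-ledger

                tx = (
                    TransactionBuilder(source, Network.TESTNET_NETWORK_PASSPHRASE, base_fee=100)
                    .append_manage_data_op("code_url", "https://example.com/code.zip")
                    .append_manage_data_op("code_hash", hashlib.sha256(code_zip).digest())
                    .append_manage_data_op("run_cost", "10 XLM")
                    .set_timeout(30)
                    .build()
                )
                tx.sign(code_account)
                server.submit_transaction(tx)
                ```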

                I will admit I am still working through this in my head, so I'm probably missing something here, and I don't think it would necessarily be easy. But as I see it, a lot of the parts are already here.