liangran How is a larger memo going to help them? He said the only way this can work is a larger memo plus encryption. Do you have a proposal for how to encrypt the data inside the memo right now?

And what is the cost of customer support going to be? So many things can go wrong. How will refunds work? What happens if he receives transactions without a memo and has no way of contacting customers? I really can't take this hypothetical use case seriously.

The core issue as I see it is this: To what extent should stellar accommodate on-chain data storage? Right now we’re very conservative, but I can certainly see a situation where we add larger memo types. I myself would love to see native multihash support.

On-chain data, whether it be in memos to annotate transactions or in data values associated with an account, has a cost associated with it: larger memos in the transaction will slow transaction application and will expand the size of the txhistory table and the size of the history archives. Likewise, larger account data values put pressure on the accountdata table. Here’s the key takeaway IMO: scalability questions such as these will always be about balancing tensions, and I don’t think simply looking to what other systems are doing will help us answer the question… they are different systems with different goals and constraints. Simply saying “Factom does 10k” isn’t valid to me.

Perhaps we can push the discussion along by discussing the three potential stances our design can take with respect to on-chain user customizable data fields:

  1. Minimize size to increase throughput while still enabling links to off-chain data
  2. Maximize size to increase the utility of on-chain data
  3. Balance size and throughput, enabling some on-chain utility (beyond simply storing off-chain links) without sacrificing performance

Our present stance is number 1, and I personally believe it is correct. I think that data values and fields should only expand to better support links to off-chain data, such as larger hashes. Larger transactions will beget larger transaction fees and since we’re aiming to increase financial access I think we should make sure to keep these as low as possible.

Maximizing size for utility (Stance 2) is a slippery slope, as mentioned. First it’s 10k for some JSON, then it’s a couple megs for a PDF, then people want to deliver video files via the blockchain. IMO it will lead to constant discussions about expanding the size as people come up with new and clever ways to abuse the ledger as a general DB. As hackers, we’re always trying to stretch limits and avoid writing any code that we don’t have to. Stance 2 is simply too easy to abuse, IMO.

Stance 3 is the toughest to navigate: What is the right size to balance the concerns involved? What metrics should we use to decide what is too large or what is too small? How many operations per second are enough? Do we expand our sizes in concert with increases in hardware and network performance?


Personally, I think we should enable easy off-chain data retrieval via horizon or some other ecosystem service outside of stellar-core. This will allow us to keep Stance 1, optimizing for scalability and throughput on-chain while allowing ease of integration with off-chain data. It’ll also let us avoid revisiting this discussion every N months in perpetuity: choosing to support larger hash values is a much simpler discussion than deciding how much JSON is too much JSON.
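
As a concrete illustration of Stance 1, here's a minimal sketch (plain Python, no Stellar SDK) of keeping a document off-chain while anchoring it on-chain: only the 32-byte SHA-256 digest goes into a MEMO_HASH, and the receiver verifies the off-chain copy against it. The invoice fields are made up for illustration.

```python
import hashlib
import json

# Hypothetical invoice document kept off-chain (e.g. in a CMS or IPFS).
invoice = {"company": "Example Trading Co.", "invoice_no": "INV-0042"}
document = json.dumps(invoice, sort_keys=True).encode("utf-8")

# MEMO_HASH holds exactly 32 bytes, which is precisely a SHA-256 digest.
digest = hashlib.sha256(document).digest()
assert len(digest) == 32

# The receiver fetches the document off-chain and checks it against the
# memo hash recorded in the transaction.
def verify(fetched_document: bytes, memo_hash: bytes) -> bool:
    return hashlib.sha256(fetched_document).digest() == memo_hash

print(verify(document, digest))     # True
print(verify(b"tampered", digest))  # False
```

Any hash scheme with a 32-byte digest works the same way; the ledger only ever carries the fixed-size fingerprint.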

All this is just IMO, of course.

liangran More than anything else in this thread, I liked the reference to SAP HANA.
Amazon had to invent new instance types with terabytes of memory to let people run SAP HANA installations:
https://aws.amazon.com/sap/solutions/saphana/

I'm sure we don't want to run stellar nodes on instances which cost $6000/month.
I agree with the others: memo or data fields may be extended for larger hashes, but not for custom data. Data should be stored off-chain.

I don't have high-level IT skills. To me, if a larger memo plus encryption were available, Stellar could apply to my business right now.
I don't know how to deploy a federation server. Even if I can find someone who can do this, the cost will be high, and most important is whether he is trustworthy enough.
Maybe I can wait; in a few years there will be many companies supplying all kinds of easy-to-use tools.

    With regard to the reference to Corda, their documents say the following:

    5.4 Attachments and contract bytecodes
    Transactions may have a number of attachments, identified by the hash of the
    file. Attachments are stored and transmitted separately to transaction data and
    are fetched by the standard resolution flow only when the attachment has not
    previously been seen before.
    https://docs.corda.net/_static/corda-technical-whitepaper.pdf

    On https://docs.corda.net/releases/release-M7.0/tutorial-attachments.html, Corda talks about how attachments are being used, with the expectation that the node originating the attachment be able to serve it to any other node, and that the requesting node maintain a cache of the attachments. The attachment hash is the only content stored in the transaction, and it is up to the smart contract to manage its own access to the attachment.

    I don't see how any of that is different from what has already been described. I've actually been playing with core, horizon, bridge, compliance, and federation on a private setup here at home. Most of the work I've been doing has been trying to determine the best way to store information about images and digital documents (content, metadata, audit trails) on a distributed ledger. While I originally thought the ledger was going to be some kind of magical place that would just take whatever I was throwing at it, storing the content in a real content management system, with extra metadata on the document to capture the transaction and hash information, is more than enough for my needs.

    I've been doing content management for government, insurance, shipping and manufacturing clients for about 20 years. I am managing systems that are subject to audit by agencies of multiple governments. Honestly, in most of the systems I've managed there are multiple queries to get a document, the metadata and the audit trail history. And all of these systems have a separate subsystem for routing documents through whatever business process or use case the document supports. And there are a lot of "standard use cases" where it almost doesn't matter what industry you're supporting.

    I don't really see a valid use case behind having a user's asset storage wallet do the work of constructing an order for someone's store. It sounds like the expectation here is that the work of constructing a valid order be pushed into the wallet, and the datastore for the whole thing be moved from the organization running the store, anchor or gateway to the ledger.

    Am I mistaken in reading the original intent as the destination system only having to capture and store the transaction information and then go retrieve the information from the ledger? Or are you talking about a custom app sitting on a stream from horizon? How many inbound transactions will either of those cases be able to support without some kind of queuing when it has to decrypt or parse the content as well as get to the back-end system fulfilling the order or acting on the payload?

    There is another reason for larger data. As we all know, one difference between Ethereum and Stellar is the VM. We can consider the VM a state machine. Stellar has defined many operation codes. If we had larger data, users could define their own op codes and put the codes in the tx. This can help them create strong contracts. This requirement was proposed by a senior developer from an exchange.

    10k may be too large, but the current size is too small. Even Twitter allows 140 characters. If we could have 256 characters, that would be much better. On the other hand, most txs will not have a memo, so it would not grow the ledger much. We would gain more benefit than cost.
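
For scale, note that the current MEMO_TEXT limit (28 bytes) applies to the UTF-8 byte length, not the character count, so Chinese text fits far fewer characters than Latin text. A quick check:

```python
# MEMO_TEXT is capped at 28 bytes of UTF-8, not 28 characters.
# CJK characters encode to 3 bytes each, so a Chinese memo fits ~9 characters.

def fits_memo_text(s: str, limit: int = 28) -> bool:
    return len(s.encode("utf-8")) <= limit

print(fits_memo_text("invoice #42"))           # True  (11 bytes)
print(fits_memo_text("供应链发票编号一二三四"))  # False (33 bytes)
```

So a 256-character request is really a request for up to ~768 bytes once CJK encoding is accounted for.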

      DavidLee If you explain exactly what you're looking for we can advise better. Maybe those tools already exist, or we can make something for you.

      liangran I've been doing some thinking about this too, but from a different perspective, building on what we already have with bridge, compliance and federation.
      Why can't we come up with a scheme where code can be given an account, and use federation, toml or account data to find where the code is located, the inputs that it needs to run, and what it might cost to run that code? It's obviously not a DAO, at least not the same way some other platforms do it, but it is almost doable today.

      The attachment convention could be used to pass messages if something more is needed (I remember some early scheme using SMTP to pass SOAP messages, so anything is possible). Or the data or toml could point to an endpoint for a standard communication protocol. And all of this is off ledger. It wouldn't have to be on ledger till value was exchanged.
      I know I'm not describing an always-on, unstoppable worldwide computer trustlessly running code on random anonymous nodes somewhere in the ether, but I am describing something that could be built on Stellar today.
      And there could actually be a business here possibly with a custom asset being used to pay for the scheme.
      I'm not saying this scheme can do what Ethereum or Hyperledger can do, but it could create a market for compute resources that could achieve an acceptable result.
      Aren't we already asking anchors and gateways to do some of this today? And I don't see the need to do anything more than pass hashes or some other kind of identifier on the ledger.

      I will admit I am still working through this in my head, so I'm probably missing something here, and I don't think it would necessarily be easy. But I see a lot of the parts as already being here.

        Butch I proposed the requirement because it is a real requirement from companies in China. Yes, we can use federation, toml or other services, but what we need is some space on-chain.

        In summary, I think increasing the memo/data a bit would not grow the ledger much but would make Stellar much more powerful. Once hundreds of companies use Stellar in their POC or production systems, Stellar will become a well-known fintech chain. Do something to meet the requirements and Stellar can grow faster.

          liangran For my benefit, can you describe the origin of the requirement? Not that it changes anything; a requirement that can't be met is always a show stopper.
          Is this coming from a government mandate, an evolving industry standard, or individual companies trying to interpret the direction they see products or platforms moving?
          I will admit that my understanding of the requirements in China is washed through several layers of client representatives, and is very limited in scope.

            Butch Use cases:

            1. In supply chain, I need to put the tax invoice with some key info, like the company name, in the memo. It is a common case in China, not only in supply chain management. A team in a travel agency also has the same requirement.

            2. An exchange called Julang wants to put some op codes in the memo so it can connect to their high-performance trading system.

              Looking at these two different cases separately.

              In supply chain, I need to put the tax invoice with some key info, like the company name, in the memo. It is a common case in China, not only in supply chain management. A team in a travel agency also has the same requirement.

              Is the data going into the memo for the benefit of the sender or the receiver?

              What you're describing is familiar. Orders and payments move through systems; the client information, the vendor information in the procurement system, the purchase order number, the invoice number, the payment's check number and, depending on the jurisdiction, the tax information have to be tracked and linked together so the accounting systems, inventory systems, and manufacturing execution systems of the multiple companies and governments involved can keep track of what's owed and to whom.

              What you're describing is putting the data structures associated with the value transfer / ledger manipulation, which are meant for the non-ledger systems, into the memo field for storage in the ledger. So the requirement is data storage and message passing of arbitrary data. Is this something we should expect to put in a generalized wallet application? Should any Stellar client be able to encode the appropriate data structure and decode the data another client has placed on the ledger?

              I will say I expect that our wallet applications should be able to utilize many of the features defined by SDF, but this message passing and handling and the data storage seem like a little much for an application on a phone someone uses to check their balances and pay for coffee. Or is the expectation that this would be a custom application with access to a funding wallet?

              Am I missing anything in this use case?

              An exchange called Julang wants to put some op codes in the memo so it can connect to their high-performance trading system.

              I have no experience with high performance trading systems, so please let me know if anything I'm saying doesn't match up.

              The intent here seems to be about messaging--sending instructions to the trading system. Will all the op codes be associated with a value transfer or ledger manipulation on Stellar? Or will there be instances where an op code is sent into the trading system that doesn't involve changing the balance of some asset, control of an account, or the properties of an account on Stellar? Is the communication channel one-way or two-way?

              As an exchange, are they "living out of Stellar" or is the integration more them being a gateway between Stellar and their trading platform?
              And again, will participating in this traffic be something that's expected of any random wallet?

              2 months later

              liangran Hi quick question: is there a way to Deposit/withdraw CNY via Ripple using stellar?
              I read a blog that you mentioned something about it.
              Actually you wrote this:
              RippleFox is an anchor in China. We issue CNY and educate people to use Stellar. People can deposit/withdraw CNY using alipay/bank card.

              We will do more in the coming days.
              1. Deposit/withdraw XRP
              2. Deposit/withdraw CNY via Ripple

              Can you please guide me on how to deposit/withdraw CNY via Ripple using Stellar?

              Thanks

              a year later

              Now we use IPFS for managing some attachments in a Stellar private network, and the problem is that our file address in IPFS is 48 bytes. 32 bytes is really too short for the memo in Stellar, and we suggest increasing it to 1KB or keeping it in TLV format.
              We really need to save some evidence on the ledger, and off-chain information (Compliance Protocol and Federation and ....) is not referable.

              Thanks

                msamadi

                The raw SHA-256 hash inside an IPFS CID is 32 bytes. Don't store the multi-format address; store the hash directly.
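
For the IPFS case, stripping the multihash wrapper from a CIDv0 ("Qm...") address is mechanical. Here's a hedged sketch with a hand-rolled base58 codec (no IPFS library assumed), exercised against a synthetic CID rather than a real one:

```python
import hashlib

# Base58btc alphabet used by IPFS CIDv0 identifiers.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58decode(s: str) -> bytes:
    n = 0
    for ch in s:
        n = n * 58 + ALPHABET.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    # Each leading '1' encodes a leading zero byte.
    pad = len(s) - len(s.lstrip("1"))
    return b"\x00" * pad + raw

def b58encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = ALPHABET[r] + out
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def digest_from_cidv0(cid: str) -> bytes:
    raw = b58decode(cid)
    # CIDv0 is a multihash: 0x12 (sha2-256), 0x20 (32-byte length), digest.
    assert raw[:2] == b"\x12\x20" and len(raw) == 34
    return raw[2:]  # the bare 32-byte digest, which fits MEMO_HASH

# Round trip with a synthetic CID built from a known digest:
digest = hashlib.sha256(b"hello").digest()
cid = b58encode(b"\x12\x20" + digest)
assert digest_from_cidv0(cid) == digest
```

The two-byte multihash prefix is constant for SHA-256 CIDv0, so the receiver can reconstruct the full CID from the bare digest when fetching from IPFS.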

                  dzham
                  We have our header and salting parameter on the SHA-256 hash, so we need at least 48 bytes. So we have two options now:
                  1. Make a lookup table outside and keep the SHA-256 of our string in the MEMO
                  2. Apply this change directly on our fork.
                  Also, if we need to attach more than one file in one transaction, we can't handle that now, so I suggest using a TLV format and defining a new MEMO type for it.

                  Regards,
                  Samadi
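
For what it's worth, the proposed MEMO_TLV could be as simple as a 1-byte tag plus a 2-byte length per entry. The layout, the tag value, and the 1K ceiling below are all assumptions taken from this thread, not anything Stellar defines:

```python
import struct

# Sketch of a hypothetical MEMO_TLV layout: each entry is a 1-byte type
# tag, a 2-byte big-endian length, then the value bytes.
MAX_TLV_MEMO = 1024  # the 1K ceiling suggested in this thread

def tlv_encode(entries):
    out = b""
    for tag, value in entries:
        out += struct.pack(">BH", tag, len(value)) + value
    if len(out) > MAX_TLV_MEMO:
        raise ValueError("memo exceeds 1K TLV limit")
    return out

def tlv_decode(blob):
    entries, i = [], 0
    while i < len(blob):
        tag, length = struct.unpack_from(">BH", blob, i)
        i += 3
        entries.append((tag, blob[i:i + length]))
        i += length
    return entries

# Two attachments (e.g. two 32-byte file hashes) in one memo:
TAG_SHA256 = 0x01  # hypothetical tag value
entries = [(TAG_SHA256, b"\x00" * 32), (TAG_SHA256, b"\xff" * 32)]
blob = tlv_encode(entries)
assert tlv_decode(blob) == entries
```

The 3-byte-per-entry overhead would let a 1K memo carry about 29 bare 32-byte hashes, or a smaller mix of hashes and short metadata values.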

                    sbsends
                    Generally it's a good idea, but why are you limited to 64 bytes? I suggest extending it as a TLV format and defining a new memo type, MEMO_TLV, with a maximum 1K size.

                    Regards,
                    Samadi

                    msamadi Also, if we need to attach more than one file in one transaction, we can't handle that now, so I suggest using a TLV format and defining a new MEMO type for it.

                    You can create a directory with all the files in it, and send the CID of the directory.

                      dzham
                      It's a good idea, but it forces us to manage our files in a specific structure, which I don't like.