https://www.interviewbit.com/problems/design-messenger/
- Q: What is the scale that we are looking at?
A: Let's assume the scale of Facebook Messages. Let's say we need to handle around 10B message sends a day and around 300M users.
- Q: Do we only need to support 1:1 conversations or group conversations as well?
A: Let's assume we are building things just for 1:1 conversations. We will extend it to group conversations if need be.
Q: Do we need to support attachments?
A: For now, let's assume that we don't. We will only look at building a plain-text messaging system.
Q: What is a reasonable limit to the size of a message?
A: Let's assume that we are building a chat messaging system, so we would expect every message to be short. We can impose a limit on the maximum size of a message: let's say we will only handle messages under 64 KB in size and reject the others.
Q: What about the notification system for new messages received?
A: Considering the size of the discussion here ( 45 mins in the interview ), we will not delve into the notification system for messages.
Estimation:
This is usually the second part of a design interview: coming up with estimated numbers for how scalable our system should be. The important parameters to remember for this section are the number of queries per second and the amount of data the system will be required to handle.
Try to spend around 5 minutes for this section in the interview.
Let's estimate the volume of each. Assume that our system would be one of the most popular messaging services.
Q: Given the number of messages being sent, how much message data are we generating every day?
A: Number of message sends : 10B
Assuming each message on average has 160 characters, that results in 10B * 160 bytes = 1.6 TB per day, assuming no message metadata.
Q: What is the expected storage size?
A: From the previous section, we know that we generate 1.6 TB of data every day if we only store one copy of each message. If we were to provision for 10 years, we are looking at 1.6 TB * 365 * 10, which is approximately 6 petabytes.
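A quick sanity check on these numbers (a sketch assuming the 160-byte average message and the 10-year horizon stated above):

public class StorageEstimate {
    public static void main(String[] args) {
        long messagesPerDay = 10_000_000_000L;   // 10B message sends a day
        long bytesPerMessage = 160;              // average message size, no metadata
        long dailyBytes = messagesPerDay * bytesPerMessage;
        System.out.println("Per day: " + dailyBytes / 1e12 + " TB");    // 1.6 TB
        long tenYearBytes = dailyBytes * 365 * 10;
        System.out.println("10 years: " + tenYearBytes / 1e15 + " PB"); // ~5.8 PB, so provision ~6 PB
    }
}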
Design Goals:
Q: Is Latency a very important metric for us?
A: Yes. Chat is supposed to be realtime, and hence the end to end time actually matters.
Q: How important is Consistency for us?
A: Definitely, yes. It's not okay if someone sends me a sequence of messages and I don't see some of them. That could lead to huge confusion. Think of cases where you miss an emergency message, or where missed messages cause a misunderstanding between individuals.
Q: How important is Availability for us?
A: Availability is good to have. If we had to choose between consistency and availability, consistency wins.
Skeleton of the design:
The next step in most cases is to come up with the barebone design of your system, both in terms of API and the overall workflow of a read and write request. Workflow of read/write request here refers to specifying the important components and how they interact. Try to spend around 5 minutes for this section in the interview.
Important : Try to gather feedback from the interviewer here to indicate if you are headed in the right direction.
Q: What are the operations that we need to support?
A:
- Send a message to another person
- For a user, fetch the most recent conversations
- For every conversation, fetch the most recent messages
Q: What would the API look like for the client?
Q: What would the sendMessage API look like?
A: Things to take care of in this API:
- sendMessage should be idempotent. If a client retries the message, the message should not be added twice. We can resolve this by generating a random timestamp based ID on the client which can be used to de-duplicate the same message being sent repeatedly.
- Ordering of messages should be maintained. If I send message A and then send message B, then A should always appear before B. However, it is possible that due to delays, if two messages are sent quickly one after another, the requests reach the DB out of order. How do we solve such a case? Obviously, we need to resolve it based on the timestamp at which the messages were sent.
Timestamp on the client is always unreliable. So, we would need to record the timestamp the first time the request hits the servers ( Need not be a part of the API to the client )
sendMessage(senderId, recipientId, messageContent, clientMessageId)
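A minimal sketch of a handler covering both points: dedup on clientMessageId, and stamping the server-side timestamp on arrival. The MessageStore interface is hypothetical; it stands in for whatever database layer we settle on below.

import java.time.Instant;

// Hypothetical storage layer; stands in for the DB chosen later.
interface MessageStore {
    boolean existsRecent(long senderId, String clientMessageId);  // dedup lookup
    long conversationFor(long senderId, long recipientId);
    void append(long conversationId, long senderId, String content,
                String clientMessageId, long serverTimestamp);
}

class SendMessageHandler {
    private final MessageStore store;

    SendMessageHandler(MessageStore store) { this.store = store; }

    // Idempotent: a retry carrying the same clientMessageId is dropped.
    // Ordering: the timestamp is taken when the request first hits the server,
    // never trusted from the client.
    boolean sendMessage(long senderId, long recipientId,
                        String messageContent, String clientMessageId) {
        if (store.existsRecent(senderId, clientMessageId)) {
            return true;  // duplicate retry; the message is already stored
        }
        long serverTimestamp = Instant.now().toEpochMilli();
        long conversationId = store.conversationFor(senderId, recipientId);
        store.append(conversationId, senderId, messageContent,
                     clientMessageId, serverTimestamp);
        return true;
    }
}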
A: This API would be called if I need to show a page of conversations/threads ( Think of the view you see when you open the Whatsapp / Messenger app ).
At a time, only a certain number of conversations will be in the viewport ( let's say 20 ). Let's call it a page of conversations. For a user, we would only want to fetch a page of conversations at a time.
Gotcha: Would the page size remain constant across different situations?
Probably not. The page size would be different across clients based on screen size and resolution. For example, a mobile’s page size might be lower than that of a web browser’s.
- Delta fetch: In most cases, our API calls will be made by users who are active on the site. As such, they already have a view of conversations till a certain timestamp and are only looking for updates after the timestamp ( which would typically be 0-2 more conversations ). For clients which are data sensitive (like mobile), fetching the whole page every time even when I have all of the conversations can be draining. So, we need to support a way of fetching only the updates when the lastFetchedTimestamp is closer to currentTimestamp.
ConversationResult fetchConversation(userId, pageNumber, pageSize, lastUpdatedTimestamp)
where ConversationResult has the following fields:
ConversationResult {
    List(Conversation) conversations,
    boolean isDeltaUpdate
}
Conversation {
    conversationId,
    participants,
    snippet,
    lastUpdatedTimestamp
}
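A sketch of how the server side might choose between a delta and a full page, per the delta-fetch note above; the ConversationStore interface and the 5-minute freshness window are assumptions:

import java.util.List;

class Conversation {   // mirrors the struct above
    long conversationId;
    List<Long> participants;
    String snippet;
    long lastUpdatedTimestamp;
}

class ConversationResult {
    List<Conversation> conversations;
    boolean isDeltaUpdate;
}

// Hypothetical lookup layer.
interface ConversationStore {
    List<Conversation> pageFor(long userId, int pageNumber, int pageSize);
    List<Conversation> updatedSince(long userId, long timestamp);
}

class FetchConversationsHandler {
    private static final long DELTA_WINDOW_MS = 5 * 60 * 1000;  // assumed freshness window
    private final ConversationStore store;

    FetchConversationsHandler(ConversationStore store) { this.store = store; }

    ConversationResult fetchConversation(long userId, int pageNumber,
                                         int pageSize, long lastUpdatedTimestamp) {
        ConversationResult result = new ConversationResult();
        if (System.currentTimeMillis() - lastUpdatedTimestamp < DELTA_WINDOW_MS) {
            // Client is nearly up to date: send only what changed.
            result.conversations = store.updatedSince(userId, lastUpdatedTimestamp);
            result.isDeltaUpdate = true;
        } else {
            result.conversations = store.pageFor(userId, pageNumber, pageSize);
            result.isDeltaUpdate = false;
        }
        return result;
    }
}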
Q: What would the API for fetching the most recent messages in a conversation look like?
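A: By analogy with fetchConversation, a paginated call keyed by the conversation is one plausible shape (an assumption, since the discussion above does not pin it down):
List(Message) fetchMessages(userId, conversationId, pageSize, lastFetchedTimestamp)
The same delta-fetch gotcha applies: when lastFetchedTimestamp is recent, return only the messages that arrived after it.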
Q: How would a typical write query look like?
A: Components:
- Client ( Mobile app / Browser, etc ) which calls sendMessage(senderId, recipientId, messageContent, clientMessageId)
- Application server which interprets the API call and calls DB to do the following:
- Puts in the serverTimestamp
- Figures out the conversation to which the message should be appended based on the other participant
- Figures out if a recent message exists with the clientMessageId
- Stores the message
- Database server which stores the message.
Q: How would a typical read query look like?
A: Components:
- Client (Mobile app/Browser, etc ) which calls fetchConversation
- Application server which interprets the API call and queries the database for the top conversation.
- Database server which looks up the user’s conversations.
A: The simplest thing that could be done here is to have multiple application servers. They do not store any data (they are stateless), and all of them behave the exact same way when up. So, if one of them goes down, we still have other application servers to keep the site running.
A: We introduce load balancers. Load balancers are a set of machines (an order of magnitude fewer in number) which track the set of application servers that are active (i.e., have not gone down). A client can send its request to any of the load balancers, which then forward the request to one of the working application servers at random.
A: If we have only one application server machine, our whole service becomes unavailable when it fails. Machines will fail, and so will the network, so we need to plan for those events. Multiple application server machines along with load balancers are the way to go.
A: NoSQL databases are inefficient for joins or handling relations. As such, NoSQL databases store everything in a denormalized fashion. In this case, we do have relations like
- user -> messages
- user -> conversations
- conversation -> messages
Q: How much data would we have to store?
A: If the size of the data is so small that it fits in a single machine's main memory, SQL is a clear winner. SQL on a single machine has next to zero maintenance overhead and great performance with the right indexes built. If your index can fit into RAM, it's best to go with a SQL solution. In our earlier estimations, however, we established that we will need to provision petabytes of data, which most definitely does not fit on a single machine.
So, a SQL solution will have a sharding overhead. Most NoSQL solutions, however, are built with the assumption that the data does not fit on a single machine and hence have sharding built in. NoSQL wins on this parameter.
Q: What is the read-write pattern?
A: Messaging is going to be very write heavy. Unlike photos or tweets or posts which are written once and then consumed a lot of times by a lot of people, messaging is written once and consumed by the other participant once.
For a write-heavy system with a lot of data, RDBMSs usually don't perform well. Every write is not just an append to a table but also an update to multiple indexes, which might require locking and hence might interfere with reads and other writes.
However, there are NoSQL DBs like HBase where writes are cheaper. NoSQL seems to win here.
A: Things to consider :
- Are joins required?
- Size of the DB
- Technology Maturity
Q: How would we store the data? Discuss schema
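A: The discussion above leans NoSQL, so one plausible wide-column layout, in the same struct notation as before (the row-key choices here are an assumption), is:
Messages {
    rowKey: conversationId + (MAX_LONG - serverTimestamp) + clientMessageId,  // newest first; clientMessageId makes retried writes collide instead of duplicating
    senderId,
    messageContent
}
ConversationsByUser {
    rowKey: userId + (MAX_LONG - lastUpdatedTimestamp),  // one denormalized row per participant
    conversationId,
    participants,
    snippet
}
Reversing the timestamp inside the row key turns "most recent first" into a plain prefix scan, which serves both fetchConversation and the most-recent-messages query.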
http://blog.gainlo.co/index.php/2016/04/19/design-facebook-chat-function/
Basically, one of the most common ways to build a messaging app is to have a chat server that acts as the core of the whole system. When a message comes, it won’t be sent to the receiver directly. Instead, it goes to the chat server and is stored there first. And then, based on the receiver’s status, the server may send the message immediately to him or send a push notification.
A more detailed flow works like this:
- User A wants to send the message "Hello Gainlo" to user B. A first sends the message to the chat server.
- The chat server receives the message and sends an acknowledgement back to A, meaning the message is received. Based on the product, the front end may display a single check mark in A’s UI.
- Case 1: if B is online and connected to the chat server, that’s great. The chat server just sends the message to B.
- Case 2: If B is not online, the chat server sends a push notification to B.
- B receives the message and sends back an acknowledgement to the chat server.
Real-time
The whole system can be costly and inefficient once it scales to a certain level. So is there any way we can optimize the system in order to support a huge number of concurrent requests?
There are many approaches. One obvious cost here is that when delivering messages to the receiver, the chat server might need to spawn an OS process/thread, initialize an HTTP (or other protocol) request, and close the connection at the end. In fact, this happens for every message. Even if we do it the other way around, with the receiver repeatedly asking the server whether there is any new message, it's still costly.
One solution is to use HTTP persistent connections. In a nutshell, receivers can make an HTTP GET request over a persistent connection that doesn't return until the chat server has data to provide. Each request is re-established when it times out or is interrupted. This approach provides a lot of advantages in terms of response time, throughput and cost.
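A minimal client-side sketch of such a long poll using java.net.http; the /poll endpoint, its after parameter, and the 60-second hold are assumptions:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

public class LongPollClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        long lastMessageId = 0;  // resume point carried across polls
        while (true) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://chat.example.com/poll?after=" + lastMessageId))
                    .timeout(Duration.ofSeconds(60))  // server holds the request open
                    .GET()
                    .build();
            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                // ... render response.body() and advance lastMessageId ...
            } catch (HttpTimeoutException e) {
                // no data within the window; the loop re-establishes the request
            }
        }
    }
}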
If you want to know more about HTTP persistent connection, you can check things like BOSH.
Online notification
Another cool feature of Facebook chat is showing online friends. Although the feature seems simple at first glance, it improves the user experience tremendously and is definitely worth discussing. If you are asked to design this feature, how would you do it?
Obviously, the most straightforward approach is that once a user is online, he sends a notification to all his friends. But how would you evaluate the cost of this?
When it’s at the peak time, we roughly need O(average number of friends * peak users) of requests, which can be a lot when there are millions of users. And this cost can be even more than the message cost itself. One idea to improve this is to reduce unnecessary requests. For instance, we can issue notification only when this user reloads a page or sends a message. In other words, we can limit the scope to only “very active users”. Or we won’t send notification until a user has been online for 5min. This solves the cases where a user shows online and immediately goes offline.
We can also talk about which network protocol to use for the connection. How to deal with system errors and how to replicate the data can be interesting as well, since a chat app is quite different from other systems.
Design a chat server | Hello World
Explain how you would design a chat server. In particular, provide details about the various backend components, classes, and methods. What would be the hardest problems to solve?
Let's try to start by analyzing the requirements. What would we need for this chat server?
1. We need a server that coordinates the chat activity. The server needs to be running at all times.
2. The server needs to support a small (eventually large) number of clients.
3. The server needs to maintain a list of the clients; each client will have a status showing whether they are online, offline, or away.
4. Each client will have a contact list that lists all of their contacts. Let's say contacts must be mutual: if client A is a contact of client B, then client B must also be a contact of client A.
5. Let's keep things simple: only two clients that are in each other's contact list, and both of whom are online, can send messages to each other. Messages could be sent in XML format, which is readable by both humans and machines.
Then let's try a typical use case to see if we have everything covered. Here are the steps in this typical use case:
1. Client A signs online; his/her status changes.
2. Client B signs online; his/her status changes.
3. Client A sends a request to B asking to be added to B's contact list.
4. Client B approves the request; now A and B are in each other's contact lists.
5. Clients A and B send each other messages; the messages are delivered.
6. Client B leaves the chat and signs off; client B's status changes.
7. Client A signs off.
Here are some classes that will be needed in our application.
enum Status   // 2 or 3 statuses
class User    // contains the contact list as well as the pending requests
class Request // a request from one user to another to be each other's contact
class Server  // runs on the server side; performs client registration and coordination of the different clients
Here are some problems that we might need to consider.
1. How would we know that a client is online? I mean really, really online.
The user's connection might be dead for some reason, so we might want to ping users periodically to check whether they are online.
2. What if there are conflicts between the data stored on the client machine and on the server machine?
If the data are out of sync, we need a mechanism to decide which one is right. For example, for the status we should probably trust the local machine, while for the contact list we might want to trust the server.
3. What if there are many users, say millions of them?
We might want to distribute the application across many servers, and syncing will become a problem.
4. How do we prevent DoS attacks? We need to think about limiting the frequency at which each user can send messages or requests to the server; see the sketch below.
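For point 4, a per-user token bucket is one common first line of defense; a minimal sketch (the capacity and refill rate here are arbitrary):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class TokenBucketLimiter {
    private static final int CAPACITY = 10;          // burst size (arbitrary)
    private static final double REFILL_PER_SEC = 1;  // sustained rate (arbitrary)

    private static class Bucket {
        double tokens = CAPACITY;
        long lastRefillNanos = System.nanoTime();
    }

    private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

    // Returns false when the user has exceeded their allowed request rate.
    boolean allow(String userId) {
        Bucket b = buckets.computeIfAbsent(userId, k -> new Bucket());
        synchronized (b) {
            long now = System.nanoTime();
            b.tokens = Math.min(CAPACITY,
                    b.tokens + (now - b.lastRefillNanos) / 1e9 * REFILL_PER_SEC);
            b.lastRefillNanos = now;
            if (b.tokens < 1) return false;  // over the limit; reject or delay
            b.tokens -= 1;
            return true;
        }
    }
}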
http://www.shuatiblog.com/blog/2014/08/24/design-online-chat-server-1/
http://www.shuatiblog.com/blog/2014/08/24/design-online-chat-server-2/
The system consists of a database, a set of clients, and a set of servers. This is not about OOD, but we need to know.
- DB stores the user list and chat archive. A SQL DB would be good, unless we want BigTable for scalability purposes.
- We use XML for server-client communication. Because it’s debugging friendly.
- A set of servers.
- Data will be divided up across machines, requiring us to potentially hop from machine to machine.
- When possible, we will try to replicate some data across machines to minimize the lookups.
- One major design constraint here is to prevent having a single point of failure. For instance, if one machine controlled all the user sign-ins, then we’d cut off millions of users potentially if a single machine lost network connectivity.
Hardest problems
Or the most interesting questions.
Q1: How do we know if someone is online?
While we would like users to tell us when they sign off, we can’t know for sure. A user’s connection might have died, for example. To make sure that we know when a user has signed off, we might try regularly pinging the client to make sure it’s still there.
Q2: How do we deal with conflicting information?
We have some information stored in the computer’s memory and some in the database. What happens if they get out of sync? Which one is “right”?
Q3: How do we make our server scale?
While we designed our chat server without worrying too much about scalability, in real life this would be a concern. We'd need to split our data across many servers, which would increase our concern about out-of-sync data.
Q4: How do we prevent denial of service attacks?
Clients can push data to us; what if they try to DoS (denial of service) us? How do we prevent that?
https://discuss.leetcode.com/topic/15/design-question-a-scalable-chat-application-on-phone-browsing/2
Design a scalable chat application on phone.
Parental Control - Parents being able to read the chat.
For a chat application, you usually want to maintain a persistent network connection between the client and the server. We can use sockets to provide a bi-directional communication channel between a client and a server. This means the server can push messages to clients. That way, clients don't have to poll the server frequently for changes, which can be a burden especially when there are many connected clients at one time.
When a person sends a message in the chat application, the server will get it and push it to all other connected clients. Since this is a mobile app, the app is just like a web client but with a different endpoint.
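A minimal server-side sketch of that push model using the Jakarta WebSocket API (JSR 356); it needs a servlet container to run, and a real system would route by recipient rather than broadcast:

import jakarta.websocket.OnClose;
import jakarta.websocket.OnMessage;
import jakarta.websocket.OnOpen;
import jakarta.websocket.Session;
import jakarta.websocket.server.ServerEndpoint;
import java.io.IOException;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

@ServerEndpoint("/chat")
public class ChatEndpoint {
    // One entry per connected client (browser or mobile app).
    private static final Set<Session> sessions = new CopyOnWriteArraySet<>();

    @OnOpen
    public void onOpen(Session session) { sessions.add(session); }

    @OnClose
    public void onClose(Session session) { sessions.remove(session); }

    @OnMessage
    public void onMessage(String message, Session sender) throws IOException {
        // Server pushes to every other connected client; no client polling needed.
        for (Session s : sessions) {
            if (s.isOpen() && !s.equals(sender)) {
                s.getBasicRemote().sendText(message);
            }
        }
    }
}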
But are you gonna store the chat data on your servers?
Most chat applications (WhatsApp, WeChat, etc.) store the chat data on the user's phone, e.g. in SQLite on Android.
Whenever the user taps on another user to chat and starts scrolling to the top, the app has to load the chat history. Is loading this data from the server a good thing?
If there are billions of users, the number of chat messages can be overwhelming, and storing all of them on the server seems to require tons of storage. I know some chat applications do not store data on the server side, because your chat history is gone once you lose the phone.
So storing the chat data on the client side seems to make sense. Depending on the type of phone, there might be limitations on the client side, for example space/memory limitation. So if you chat a lot on the phone, the old messages might have to be purged (or uploaded to the server) to free up some space.
Usually, what I've noticed in chat applications is that you are shown the recent conversation. When you initiate a chat session, you don't have to load the entire chat history.
So, assume users don't care about chat history (aka lost your phone and your entire chat history is gone), we can store all chat data entirely on client side.
But according to the requirements, parents need to be able to read the chat. Does the parent read the chat using the same client or from a different client? If it is the latter, the chat history needs to be stored on the server side too.
I've noticed that the chat gets synced every night at 3 AM to Google Drive on WhatsApp. So in that case, even if the local data is lost, it can be reloaded. Using this approach, if the chat history is really tons of data for us to store, then we could guarantee the user that the history is available for a day, after which it is synced to the user's Google Drive and deleted from our servers. Again, this is just another way for us to save storage space. On WhatsApp the sync happens from the phone, but it could be done by the application backend servers too, since the user has given access to the Drive and we have the data. That would avoid problems like the client not being connected to the network or the phone being turned off during the sync.
I'm not sure how exactly the interviewer wanted it to be designed when it comes to accounts. But I would think that parents have their own accounts and would be checking the messages from their own phones.
Interesting approach to use Google Drive as a backup. This saves them storage cost, so why not?
What about scalability? How could we solve the problem where I have a lot of friends (I am a celebrity) and they update their chat status very often (offline, online, visible, invisible)? Is it worth transferring all their statuses to me via open websockets when they change status very often?
Assuming you were to support group chat functionality with one million concurrent users, this will be a huge design challenge. Do you have any good ideas?
EDIT: Seems like maintaining one million web sockets is not an issue on a single server, according to this SO post: http://stackoverflow.com/a/17451928/490463
So yeah, I would still go with web sockets.
Actually, I think it is impossible to convey all friends' information to the user. We have to split friends into circles, each with a rating, and we will update the statuses of the first circle very often. These are the friends with whom the user has permanent communication and about whom they care the most. The other users could be updated over a longer period, with some delay. When I log on to the system, I will pull all friends' information. Of course, we have to store all user interactions in some in-memory distributed cache.
Designing a social network that scales is a totally different beast, and is worth a design discussion topic on its own.
This is a chat application on mobile. The main use case is to have a 1-1 conversation between two people
This is a valid requirement for chat apps. If you have lots of friends that's not a problem with websockets. It will only publish your status change to all your online friends.
You mean you have a giant friend list and many friends are changing their statuses frequently?
This is still fine, I think. The server can do batching, so multiple status changes are batched together in one push to the client. This saves the phone's battery and also avoids communication overhead.
We can use an in-memory key value store such as Redis/Memcache to store online statuses. Since all data is in-memory, each query coming in would be efficient.
We maintain a simple mapping of user_id to status in Redis.
When a user gets online, query each user in the friend list and return the statuses back to the client (Query will be fast since it is in-memory).
When a user changes status, publish the status change to each of his/her online friends. The list of online friends can be sent via the payload or stored as a cache in the server if the list is too huge.
Yes, redis/memcache is distributed in-memory cache. You can scale linearly by just throwing more servers.
Plus, redis is able to persist data to disk in intervals, so when a machine restarts it can still load the data from disk.
One thing worth considering is people typically chat with friends who are from the same geographic region, so you probably want to store users from the same geographic region in the same data center.
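A minimal sketch of that user_id -> status mapping with the Jedis client; the key naming and the 60-second TTL are assumptions:

import java.util.List;
import redis.clients.jedis.Jedis;

class PresenceService {
    private final Jedis jedis = new Jedis("localhost", 6379);

    // Write the user's status; the TTL means a crashed client eventually reads as offline.
    void setStatus(long userId, String status) {
        jedis.setex("status:" + userId, 60, status);
    }

    // One round trip fetches the whole friend list's statuses.
    List<String> statusesOf(List<Long> friendIds) {
        String[] keys = friendIds.stream()
                .map(id -> "status:" + id)
                .toArray(String[]::new);
        return jedis.mget(keys);  // null entries mean offline / expired
    }
}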
For status updates, I think a single server alone can handle the load without issues. Assuming you have 1 million concurrent users, and:
- 50,000 users (5%) change their status every second
- There are 4 different statuses (each status can be represented by 2 bytes):
- Online
- Offline
- Away
- Busy
- On average a user has 10 online friends.
The estimated upload bandwidth per second is:
50,000 * 10 * 2 bytes = 1 MB/s
According to the above estimation, the bandwidth doesn't seem to be a concern, even with a crappy home internet connection.
Even in the unlikely event that all one million concurrent users change their status every second, the upload bandwidth is just 20 MB/s, which is still manageable.
One question that came to my mind: what happens when we send a message to a user who is offline? Is it possible to store this message in some distributed event bus? Or will everything be stored locally on the phone, in a SQL database or preference storage, and when the user comes online a peer-to-peer connection is initiated between our phone and the user's phone and everything is pushed?
I would store the message in the server side database. When the receiving user comes online, I will query the database for any pending offline messages and deliver them to the user immediately
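A sketch of that server-side flow: queue while the recipient is offline, drain on connect. The in-memory map is a stand-in for the server-side database mentioned above:

import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

class OfflineDelivery {
    // recipientId -> messages waiting for them (stand-in for a pending-messages table)
    private final Map<Long, Queue<String>> pending = new ConcurrentHashMap<>();

    void deliverOrQueue(long recipientId, String message, boolean online) {
        if (online) {
            push(recipientId, message);
        } else {
            pending.computeIfAbsent(recipientId, k -> new ConcurrentLinkedQueue<>())
                   .add(message);
        }
    }

    // Called when the recipient connects: deliver everything queued while they were away.
    void onConnect(long recipientId) {
        Queue<String> queued = pending.remove(recipientId);
        if (queued != null) {
            for (String m : queued) push(recipientId, m);
        }
    }

    private void push(long recipientId, String message) {
        // hand off to the recipient's open connection (websocket / long poll)
    }
}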
http://shirleyisnotageek.blogspot.com/2015/03/design-chat-server.html
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

enum StatusType { online, offline, away }

class User {
    String username;
    String display_name;
    List<User> contactList;
    List<ConnectRequest> requests;
    StatusType currentStatus = StatusType.offline;
    String status;  // free-form status message
    private Server belongsTo;
    private Map<User, List<String>> messageBox;

    public User(Server belongsTo, String userName) {
        this.belongsTo = belongsTo;
        username = userName;
        contactList = new ArrayList<User>();
        requests = new ArrayList<ConnectRequest>();
        messageBox = new HashMap<User, List<String>>();
    }

    boolean updateStatus(String message) {
        status = message;
        return true;
    }

    boolean sendRequest(String userName) {
        if (!belongsTo.userExists(userName)) {
            System.out.println("No such user!");
            return false;
        }
        User toAdd = belongsTo.getUser(userName);
        ConnectRequest send = new ConnectRequest(this, toAdd);
        toAdd.requests.add(send);
        return true;
    }

    boolean approveRequest(ConnectRequest r) {
        // the sender may have deactivated the account before the request is approved
        if (!belongsTo.userExists(r.sender.username)) {
            System.out.println("No such user!");
            return false;
        }
        contactList.add(r.sender);
        r.sender.contactList.add(this);  // contacts are mutual
        requests.remove(r);
        return true;
    }

    boolean denyRequest(ConnectRequest r) {
        requests.remove(r);
        return true;
    }

    void removeContact(String userName) {
        User toRemove = belongsTo.getUser(userName);
        contactList.remove(toRemove);
    }

    boolean sendMessage(String userName, String message) {
        User toBuzz = belongsTo.getUser(userName);
        if (!contactList.contains(toBuzz)) {
            System.out.println("The user is not in your contact list!");
            return false;
        }
        if (toBuzz.currentStatus == StatusType.offline) {
            System.out.println("User is currently offline!");
            return false;
        }
        // keep a copy of the conversation on both sides
        if (!messageBox.containsKey(toBuzz)) {
            messageBox.put(toBuzz, new ArrayList<String>());
            toBuzz.messageBox.put(this, new ArrayList<String>());
        }
        messageBox.get(toBuzz).add(message);
        toBuzz.messageBox.get(this).add(message);
        return true;
    }

    void setStatus(StatusType s) {
        currentStatus = s;
    }
}

class ConnectRequest {
    User sender;
    User receiver;

    ConnectRequest(User sender, User receiver) {
        this.sender = sender;
        this.receiver = receiver;
    }
}

class Server {
    List<User> dataBase = new ArrayList<User>();

    boolean userExists(String userName) {
        for (User u : dataBase) {
            if (u.username.equals(userName)) return true;
        }
        return false;
    }

    void registration(String userName) {
        if (userExists(userName)) {
            System.out.println("user name exists, please change for another one");
            return;
        }
        dataBase.add(new User(this, userName));
    }

    void deactivate(String userName) {
        dataBase.remove(getUser(userName));
    }

    User getUser(String userName) {
        for (User u : dataBase) {
            if (u.username.equals(userName)) return u;
        }
        return null;
    }
}
http://buttercola.blogspot.com/2014/11/facebook-design-chat.html
http://systemdesigns.blogspot.com/2015/12/facebook-chat-messeger.html
Function 1: Real-time presence notification:
The most resource-intensive operation performed in a chat system is not sending messages. It is rather keeping each online user aware of the online-idle-offline states of their friends, so that conversations can begin.
The naive implementation of sending a notification to all friends whenever a user comes online or goes offline has a worst case cost of O(average friendlist size * peak users * churn rate) messages/second, where churn rate is the frequency with which users come online and go offline, in events/second. This is wildly inefficient to the point of being untenable, given that the average number of friends per user is measured in the hundreds, and the number of concurrent users during peak site usage is on the order of several millions.
Surfacing connected users' idleness greatly enhances the chat user experience but further compounds the problem of keeping presence information up-to-date. Each Facebook Chat user now needs to be notified whenever one of his/her friends (a) takes an action such as sending a chat message or loading a Facebook page (if tracking idleness via a last-active timestamp), or (b) transitions between idleness states (if representing idleness as a state machine with states like "idle-for-1-minute", "idle-for-2-minutes", "idle-for-5-minutes", "idle-for-10-minutes", etc.).
Note that approach (a) changes sending a chat message / loading a Facebook page from a one-to-one communication into a multicast to all online friends, while approach (b) ensures that users who are neither chatting nor browsing Facebook are nonetheless generating server load.
Real-time messaging:
Distribution, Isolation, and Failover:
Fault tolerance is a desirable characteristic of any big system: if an error happens, the system should try its best to recover without human intervention before giving up and informing the user. The results of inevitable programming bugs, hardware failures, et al., should be hidden from the user as much as possible and isolated from the rest of the system.
While this architecture works pretty well in general, it isn't as successful in a chat application, due to the high volume of long-lived requests, the non-relational nature of the data involved, and the statefulness of each request.
For Facebook Chat, we rolled our own subsystem for logging chat messages (in C++) as well as an epoll-driven web server (in Erlang) that holds online users' conversations in-memory and serves the long-polled HTTP requests. Both subsystems are clustered and partitioned for reliability and efficient failover. Why Erlang? In short, because the problem domain fits Erlang like a glove. Erlang is a functional concurrency-oriented language with extremely low-weight user-space "processes", share-nothing message-passing semantics, built-in distribution, and a "crash and recover" philosophy proven by two decades of deployment on large soft-realtime production systems.
Glueing with Thrift:
Thrift translates a service description into the RPC glue code necessary for making cross-language calls (marshalling arguments and responses over the wire) and has templates for servers and clients. Having Thrift available freed us to split up the problem of building a chat system and use the best available tool to approach each sub-problem.
(1) Discuss the scenario
A basic chat implementation covers: a user coming online; querying the buddy list; finding online friends; chatting; ending the chat; and looking up previous chat history.
Basic scenario: display of user status; accurate real-time sending and receiving of messages; chat history; plus a buddy list.
(2) Propose a framework
If we abstract the functionality, the design direction can be summarized with the most basic MVC:
The View (UI) handles interaction with the user; the Controller handles channel setup and message delivery; the Model part is the data, storing all kinds of user information.
(3) For a concrete example implementation, see:
A fairly typical interview question: use OOD knowledge to design the simplest possible chat service.
Create different objects for the different requirements, then pick a different storage approach for each object. See the JiuZhang (九章算法) video; the figure is a screenshot of JiuZhang's design of the chat server's data layer (not covered here).
Practical problems you will actually run into: dropped connections; heartbeats; long-lived connections (HTTP connections).
2. FaceBook Chat
This part is today's focus. I want to share how Facebook chat evolved from the simple program in section 1 into one serving the 70 million users it had at the time.
Through the scenario analysis in section 1, Facebook, like us, identified the basic features to implement:
· Real-time message sending and receiving (the channel?);
· Chat history (the chat log survives while the user navigates between Facebook pages);
· Display of users' online/offline status.
(3) Necessary: constraints/hypotheses (define the limiting conditions, e.g. number of users / traffic volume)
For a Facebook user, how many friends do we assume they have? Let's say 200.
Yang: When this user comes online and uses chat, we need to display the status of all of their friends;
that is, we have to query the database once to get the status information of all of their friends.
Shirley: Right. One person's status has to be shown in real time to all of their friends, as well as to the friends in any chat sessions they are taking part in.
Yang pachi: So across all Facebook users, every person generates this kind of query request whenever they use chat.
This is Facebook's first hard problem: real-time presence notification. The problem is simple, but the order of magnitude is enormous.
The complexity is O(average friendlist size * peak users * churn rate) messages/second, where churn rate is the frequency with which users come online and go offline, in events/second.
@Mr.C points out that binding the user and the status together creates an efficiency problem: we only want to know whether this person is ON/OFF, but we first have to fetch a pile of user objects and then dig the statuses out of them.
Summary
Where numbers are concerned, presence is the challenge that most readily appears when a social application scales from a small user base to a massive one: a simple, basic, but unavoidable operation (the status query) starts to strain the system. For something similar, see how, as QQ grew from millions to hundreds of millions of users, the question of whether to change the username structure had a huge impact on the system design.
- The evolution of QQ's architecture - samuelchoi - cnblogs
http://www.cnblogs.com/samuelchoi/archive/2012/03/31/2427424.html?from=timeline&isappinstalled=0
(4) The chat logger problem
This one is tied to a Facebook-specific trait: the current chat log has to be preserved while the user switches between pages.
(5) Chat stability
This step combines Application: service/algorithm (decomposing the application into services that satisfy the various requirements) with Kilobit: data (storage for the data those services produce).
"Decomposing the application into services that satisfy the various requirements" is the same idea as a line from the video: "Discrete responsibilities for each service".
Having watched the video, my take on Facebook's design philosophy is: if some service is badly needed, build it first, then use tooling to loosely couple the different services together. That way a different language and toolchain can be chosen for each service, and each part reaches its best efficiency.
The Thrift in the figure is Facebook's glue language, used to bind the modules together. Because Thrift is there, engineers can apply "personalized" optimizations to each of the different servers above, according to its characteristics.
(1) Presence Server
It is written in C++. Our question now: what kind of data structure might it be?
Lots of reads; it has to be fast, stable, and so on... any ideas?
For writes, ideally it supports fast bulk writes.
A hash?
Isn't a hash brute force?
Is there a simpler structure?
A large array, indexed on user id; the C++ service aggregates the online info in memory.
"Each cluster of channel servers keeps an array to record which users are available to chat. Building a list of a user's online friends might require knowledge from each of the many channel clusters, so we instead publish the channel servers' collective knowledge to one place — the presence servers — so that building a list of online friends requires only one query."
"It won't all fit in memory, will it?" This is a question that comes up all the time in system design.
About the Presence Server: its structure is an array, so how does it guard against a single point of failure?
Facebook's solution here is: replicated, not partitioned.
Because the data structure is simple, and the machines are identical, if one goes down another can take over directly: fast and convenient.
(2) Concrete design of the Channel Server
For this question we need to go back to chat's main function: sending and receiving messages.
Consider how Facebook users actually behave: some open chat and immediately start messaging; some merely log in, open a chat window, and send nothing (endlessly drafting and redrafting a message to their crush [Chuckle]). These different situations boil down to one thing:
concurrent long-open requests
"The channel servers are the most intricate piece of the backend. They're responsible for queuing a given user's messages and pushing them to their web browser via HTTP."
I'm not especially familiar with this area, but given the characteristics above Facebook went straight to the Erlang language to build the channel server, and combined it with front-end techniques: Ajax long polling from the Comet model plus Erlang to implement the chat conversation model.
Comet is a web application model in which a long-held HTTP request allows a web server to push data to a browser, without the browser explicitly requesting it.
The reason for choosing Erlang, in short: Erlang is good at holding a ton of open connections, which is the hardest part of a chat service. (For details, see: Why was Erlang chosen for use in Facebook chat? https://www.quora.com/Why-was-Erlang-chosen-for-use-in-Facebook-chat)
The result is that each user's channel stays in constant communication with the front end; meanwhile the channel server periodically reports users' online information to the Presence Server, and pushes to the user whenever a message arrives.
Once the Channel Server is implemented, its safety and stability must also be considered; Facebook used the traditional distributed design approach. You can look the concept up; it is quite common.
(3) Chat Logger
Implemented in C++; distributed design is used to avoid a single point of failure.
(4) Web Tier
The control part of chat, and at the same time the interface to the rest of Facebook's features; implemented in PHP.
Recap: this part dug a bit deeper into the Facebook system diagram above, mainly covering the design of the Presence Server and the Channel Server, and how to make them more robust. The three concepts of distributed design, replication, and partitioning are worth looking up and mastering; I think they can be applied to all kinds of system designs.
4. Evolve (upgrade the system according to the interviewer's requirements)
The interviewer may ask: what else could be improved? (Plenty, of course.)
(1) Hardware-level redundancy to prevent single points of failure (different servers call for different backup methods).
(2) Performance improvements: this gets into concrete algorithms and implementations;
a. For example, the chat logger serves queries like "last K messages"; think about what algorithm could satisfy such a need;
b. Like @Songyan's question: what does the communication between the modules look like, and is there room for improvement? As a simple example, we can choose between compressing and not compressing to improve message transfer (different compression algorithms may come up), but that trades CPU time for it.
5. Facebook fun fact
Dark Launch: once chat was developed, how do you guarantee a stable release? Testing, sure, but what if the test cases can't cover everything? Test it in the real world: for the few weeks before the official launch, users actually already had the chat feature, but the UI was hidden by the engineers and clients were set to send messages automatically. After it worked stably in the dark, Facebook turned on the light and officially launched.
Similarly, a friend working at TripAdvisor told me that after their product ships, not all users are updated to the same version; different users' apps may end up with different versions and features after an update. This, too, is for testing and stability.
- Long-running concurrent requests:/ long opened request: http://symcbean.blogspot.com/2010/02/php-and-long-running-processes.html
Apache:
Channel:
Partition:
Dark Launch/test
QQ scale
the model of chat server
the scale of Facebook chat
Vanilla Web
Distributed Design
Replicated not partitioned
distributed hash table
http://www.cnblogs.com/samuelchoi/archive/2012/03/31/2427424.html
Not only in the IM business: Tencent has taken detours in many different businesses and accumulated the corresponding experience. "Refactor as you go", "make big systems small", "survive first, optimize later", "keep it clean"... these ideas and values came out of constant trial, error, and reflection; they are the lessons drawn from the process of technical evolution.
We certainly need a user class. We need to handle concurrency. That is the hardest problem.
import java.util.HashSet;
import java.util.Set;

class MyUser {
    private long ID;
    private String username;
    private String password;

    public void speak() {
    }
}

class MySession {
    Set<MyUser> users = new HashSet<MyUser>();

    public void handleEvent() {
        // hitting 'Enter' by any user will trigger an event.
        // We need to handle them in turns.
        // Need a queue here, perhaps.
    }

    public void display(MyUser user) {
        // display user's chat
    }
}

class MyChatServer {
    MySession currentSession;
    Set<MyUser> registeredUsers;

    public void register() {
        // register a user
    }

    public void login() {
        // ask for username and password, retrieve the user
        MyUser user = null;
        currentSession.users.add(user);
    }

    public void run() {
    }
}
- User A signs online
- User A asks for their contact list, with each person’s current status
- Friends of User A now see User A as online
- User A adds User B to contact list
- User A sends text-based message to User B
- User A changes status message and/or status type
- User A removes User B
- User A signs offline
https://tianrunhe.wordpress.com/2012/03/21/design-an-online-chat-server/
Chat room
Same as before: I'll start by thinking it through, hoping that discussion can deepen this system design problem and the comparison between different designs.
To support 2-1000 users chatting online at the same time, like the chat alongside a live video stream on YouTube, we have the following components:
- client app: the browser; talks to the server over websockets.
- load balancer: stands between the client and the corresponding chat server, also connected over websockets.
- chat server: each chat server holds socket connections from multiple load balancers and hosts multiple chat room objects. The sockets of the users chatting in one room are attached to that room's object.
- storage medium: undecided. A relational database is too slow. Is one Kafka topic per chat room overkill? Maybe Mongo?
The app keeps some state: every time it establishes a connection, it reports the last message ID it received (message IDs are issued by the server and form a strictly increasing sequential number). The server then returns the messages you missed after that ID (see the sketch below). The load balancer finds the chat server via consistent hashing and establishes the connection.
When a chat server is newly created, it needs to fetch the storage medium's information from the relational database, and either create a new document or load the existing one.
If a chat server finds that all of its connections have dropped, either no one is online or the chat server itself has lost connectivity; in that case it should decisively shut down its chat room. This avoids the possibility of multiple processes modifying the storage medium at the same time.
The chat server should occasionally send every user an "are you still there?" message. If it cannot be delivered, mark that user as offline.
When a user sends a message, the chat server writes it to the storage medium and publishes the new message to each of the other online users, tagged with the message's ID.
Some information sent by users does not need to be stored. For example, "Ma Dongmei is typing" needs to be published to the other online users but does not need to be written to the storage medium.
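A sketch of the catch-up mechanism described above: the server issues strictly increasing message IDs, and a reconnecting client reports the last ID it saw to receive everything it missed (an in-memory map stands in for the storage medium):

import java.util.Collection;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

class ChatRoom {
    private final AtomicLong nextId = new AtomicLong(0);  // server-issued, strictly increasing
    private final ConcurrentSkipListMap<Long, String> messages = new ConcurrentSkipListMap<>();

    // Persist the message under a fresh ID, then fan it out to online users.
    long publish(String message) {
        long id = nextId.incrementAndGet();
        messages.put(id, message);
        return id;
    }

    // On reconnect the client reports the last ID it received;
    // everything after it is returned so no message is lost.
    Collection<String> missedSince(long lastSeenId) {
        return messages.tailMap(lastSeenId, false).values();
    }
}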