enciv / undebate
Not debates, but recorded online video Q&A with candidates so voters can quickly get to know them, for every candidate, for every election, across the US.
License: Other
Take the design from #18 and implement it in React. The design is here:
https://www.figma.com/file/jtIoqpnhGfBEAHSVLAIVf4/undebate_UI?node-id=0%3A1
Focus specifically on this layout: https://www.figma.com/file/jtIoqpnhGfBEAHSVLAIVf4/undebate_UI?node-id=61%3A631
Except:
Do not implement the "^" (fold) feature to the right of "Introduction"
And the ">" feature at the right of the line of videos is optional since the videos will rotate.
We will get more detail soon on how the layout should work in portrait mode.
Copy the undebate.jsx file into candidate-conversation.jsx, implement it there, and call the component CandidateConversation.
To test it, you will have to edit the database and add a new Iota record. To add the record, go to your Heroku account and click on this app. You will see a link for mLab MongoDB; click on that. A new tab will open with a list of collections; click on iota. Then click the [+ Add document] button. You will see an empty record; delete the "{}" that's there and paste in this text:
```json
{
  "path": "/candidate-conversation",
  "subject": "School Board Candidate Conversation - Candidate Conversation",
  "description": "A prototype Candidate Conversation for schoolboard",
  "component": {
    "component": "MergeParticipants"
  },
  "webComponent": {
    "webComponent": "CandidateConversation",
    "opening": {
      "line1": "You are about to experience a new kind of conversation",
      "line2": "This is how voters can learn about candidates in a more human way",
      "line3": "And this is how we can efficiently facilitate 500K conversations all over the country, every election season",
      "line4": "The topic of the discussion is:",
      "bigLine": "US School Board Candidate Conversation",
      "subLine": "This is a mock conversation, these are not real candidates"
    },
    "audio": {
      "intro": {
        "url": "https://res.cloudinary.com/hf6mryjpf/video/upload/v1567095028/Generic_Light_Intro_7_sec_yp3lxk.wav",
        "volume": 0.8
      },
      "ending": {
        "url": "https://res.cloudinary.com/hf6mryjpf/video/upload/v1567096104/Heart_Soul_Entire_Duet_2_min_45_sec_1_j5sbpj.wav",
        "volume": 0.7
      }
    },
    "participants": {
      "moderator": {
        "name": "David Fridley",
        "speaking": [
          "https://res.cloudinary.com/hf6mryjpf/video/upload/v1566788682/candidate-conversation-moderator-0_at5un1.mp4",
          "https://res.cloudinary.com/hf6mryjpf/video/upload/v1566788667/candidate-converation-moderator-1_z2kjhr.mp4",
          "https://res.cloudinary.com/hf6mryjpf/video/upload/v1566788659/candidate-confersation-moderator-2_cid3dq.mp4",
          "https://res.cloudinary.com/hf6mryjpf/video/upload/v1566788634/candidate-conversation-moderator-3_iq0npa.mp4"
        ],
        "listening": "https://res.cloudinary.com/hf6mryjpf/video/upload/v1566788719/candidate-conversation-moderator-listening_nlfeoy.mp4",
        "agenda": [
          [
            "Introductions",
            "1- Who you are",
            "2- Where you are",
            "3- One word to describe yourself",
            "4- What office you are running for"
          ],
          [
            "What type of skills should students be learning for success in the 21st century?"
          ],
          [
            "Closing Remarks"
          ]
        ],
        "timeLimits": [
          10,
          60,
          60
        ]
      }
    }
  }
}
```
This record is the same as the one for localhost://schoolboard-conversation, except that you'll be able to access this one at localhost://candidate-conversation.
We want to be able to A/B test different questions. The challenge is we have to get the candidates to answer all the questions, and then we can play A or B to the users, but we need a good way to measure user reaction to A or B.
Implement a new UI for the candidate recorder.
Start with app/components/web-components/undebate.jsx and copy it into a new file - called cc-recorder-self
Here is a clickable Figma prototype mockup - https://www.figma.com/proto/jtIoqpnhGfBEAHSVLAIVf4/undebate_UI?node-id=277%3A1&scaling=min-zoom
Here is the link to the figma:
https://www.figma.com/file/jtIoqpnhGfBEAHSVLAIVf4/undebate_UI?node-id=320%3A257
Here is a video walkthrough:
https://drive.google.com/file/d/1WYPkGWC0ZwnbUQIpAZgxwxz_VG-i2Ns1/view?usp=drive_web
This is related to, but different from, #78.
The self directed UI is different from the directed UI in that the user is in control of taking each next step.
We want to implement both in order to test both with users, collect feedback, and see where they go.
Sometimes the stall window flashes quickly - this happens when data is coming in slowly. We can improve the stall screen so that if it comes on, it stays on for at least one or two seconds (pausing playback during that time) to avoid the flashing effect. May be related to #2.
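A minimal sketch of the minimum-display idea, as a small controller that could be wired to the video element's `waiting`/`canplay` events. The callback names and the 1.5-second constant are assumptions for illustration, not code from undebate.jsx:

```javascript
const MIN_STALL_MS = 1500; // keep the stall screen up at least this long

function createStallController(onShow, onHide) {
  let shownAt = 0;
  let hideTimer = null;
  return {
    // call this on the video element's `waiting` event
    stall() {
      shownAt = Date.now();
      onShow(); // show the stall overlay and pause playback
    },
    // call this on `canplay`; hiding is deferred until MIN_STALL_MS has elapsed
    resume() {
      const elapsed = Date.now() - shownAt;
      const wait = Math.max(0, MIN_STALL_MS - elapsed);
      clearTimeout(hideTimer);
      hideTimer = setTimeout(onHide, wait); // hide the overlay and resume playback
    },
  };
}
```

Even if data arrives a few milliseconds after the stall begins, the overlay stays up for the full minimum, which removes the flash.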
Provide collateral for the HackforLA website
See project cards on the hackforla.org website for examples
By adding the project's logo/image to your project's primary repository, we will be able to dynamically deliver up-to-date information about your project to the hackforla.org website. Also, when people add the link to the repository on LinkedIn, Slack, or other social media, it will automatically use the image as well as the description, improving the link's chances of getting clicked on.
Add the project's logo/image to your primary GitHub repository using the instructions below. You should use the same image as is on the hackforla.org website; if another image is desired, please replace both with the same image.
Tip: Your image should be a PNG, JPG, or GIF file under 1 MB in size. For the best quality rendering, we recommend keeping the image at 640 by 320 pixels.
Read Github's Customizing your repository's social media preview
The viewer has an intro page, where the user presses Begin. Eliminate the page so that the candidates are shown initially - but you still have to have the Begin button.
in app/components/web-components/undebate.jsx
beginOverlay=()=> renders an overlay; the user clicks Begin, components move out of the way, and the video windows move into the screen.
We want to disable this so that the video windows are all shown, and the user has to hit the play button to get the videos to start. Don't take out the intro code; just add the new feature and assume a new this.props.enableIntro would enable it.
It's fully agreed that this file is a mess and code like this needs to be broken apart and refactored. I'm eager to discuss ways, but that's a bigger job.
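The begin/intro switch above could be sketched as an initial-state helper. The `enableIntro` prop name comes from the issue; the rest of the state shape is hypothetical:

```javascript
// Hypothetical sketch: choose the starting phase from a new enableIntro prop.
// The state field names here are illustrative, not the actual undebate.jsx state.
function initialConversationState(props) {
  if (props.enableIntro) {
    // legacy behavior: show the intro overlay with the Begin button
    return { intro: true, begin: false, allShown: false };
  }
  // new behavior: all video windows visible, waiting for the Play button
  return { intro: false, begin: false, allShown: true };
}
```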
For the January launch of this with our partner, we want to make a few simple UI changes:
1) Put the title of the election (like San Francisco District Attorney) at the top of the screen.
2) Make the next up window the same size as all the other windows, and re-position the windows on the screen accordingly.
3) Move the EnCiv Logo from the bottom up to the top left of the screen. (removed: and add the ability to navigate to an about page)
4) The initial link (for example https://undebate.herokuapp.com/san-francisco-district-attorney) should land on the screen with all the candidate windows showing - skipping the intro page with the coffee cups.
Create a copy of undebate.jsx and call it reundebatestate.jsx or something like that.
Pull out the state, including seatOffset, round, finishUp, done, begin, intro.
You should leave seatStyle and the rendering in the app for now.
Use redux
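A minimal sketch of what the extracted reducer might look like. The state fields are the ones listed above; the action names and transitions are assumptions, not the actual logic in undebate.jsx:

```javascript
// Assumed initial values; the real defaults live in undebate.jsx's constructor.
const initialState = {
  seatOffset: 0,
  round: 0,
  finishUp: false,
  done: false,
  begin: false,
  intro: true,
};

function undebateReducer(state = initialState, action) {
  switch (action.type) {
    case 'BEGIN':
      return { ...state, begin: true, intro: false };
    case 'NEXT_SPEAKER':
      return { ...state, seatOffset: state.seatOffset + 1 };
    case 'NEXT_ROUND':
      return { ...state, round: state.round + 1, seatOffset: 0 };
    case 'FINISH_UP':
      return { ...state, finishUp: true };
    case 'DONE':
      return { ...state, done: true };
    default:
      return state;
  }
}
```

The component would then dispatch these actions instead of calling setState, and seatStyle would still be computed during rendering for now.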
I had to do a unit test. I put it in app/tests. It's just a standalone file that runs and exits, returning the number of errors. I know there are better environments for this - but we need to set one up and document how to use it.
So the issue is to decide on a unit test environment, set it up, and convert the one unit test into that format.
Consider that we should do a lot of unit tests of react components.
Sometimes we have to test how the React component interacts with the browser - like react-camera-recorder. This may require something like Storybook, and a human may actually have to execute these steps, but we need to lay out the steps. So it may be separate from automated unit tests - but we have to have a plan.
The web app code is generated at compile time and stored in dist/client. At the end of the browser's load of the HTML page, it calls out for this file. Let's gzip it for faster transfer. Look in postinstall.sh for the scripts that do all the build work.
When playing a conversation on an iPhone, Adolf and David have both seen an issue where the stalled (grey) window pops up but doesn't go away. If you reload, or play with the next/back buttons, the problem disappears. After the first time, this problem doesn't happen again (probably because the first part of the file is cached).
We want to alphabetize the candidates. That means the first person would always start out in the larger next-up window, which might bias things. So we want to shrink that window.
If you go to undebate.herokuapp.com/schoolboard-conversation on a desktop, you'll see that you can change the size of the window, and the display will re-arrange based on the new dimensions. Our objective is to always have everything on the screen - no scrolling. So the positions for things are calculated based on the screen size.
This change needs to be done in the 3 different layouts that are chosen based on resolution, and in the initial state that is rendered server side before the screen size is known.
The file is app/components/web-components/undebate.jsx
in constructor(), near the end:

```
state = { ..., seatStyle: { ..., nextUp: { left: xxx, top: xxx, width: xxx } } }
```

then introSeatStyle: { left: xxx, ... } - this needs to position it off the screen
then in calculatePositionAndStyle()
there are 3 cases where these variables need to be set: width/height > 0.8, width/height>1.8 and portrait mode.
nextUpWidthRatio,
seatStyle.nextUp.left
seatStyle.nextUp.top
seatStyle.nextUp.width
introSeatStyle.nextUp
Changing the width is easy, but changing the left and top, and possibly having to move other windows around to make it look good, will be the work.
Constituents would go here to voice their concerns, issues, etc. After they voiced theirs, they would see 10 others' concerns and pick the 3 most important for the community to hear. Depending on how many people participate, there would be multiple rounds of reviewing concerns, until there were about 10 - and then the elected representatives would be able to record a response to the 10, which would be available for everyone to see.
For users with admin privileges (also not implemented) - be able to view a list of the debate links, with their titles and descriptions - something that is being managed in a spreadsheet right now.
This project imports "string".
string has been tagged by GitHub as having high severity vulnerabilities and no fix, and there doesn't seem to be any development on it, so we need to replace it.
Figure out which functions of "string" are being used, and find a replacement on npm.
When a 'bot' type browser accesses an undebate link, we should render the React into an image, and serve back a simple HTML file with that image in it. The image should look like the page, with an image of the video from each candidate in it. There are two issues here: first, how to render React into an image; second, adjusting undebate.jsx so that it's getting images of the candidates rather than video.
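For the first part - recognizing bots so the server can branch to the image-based response - a hedged sketch of user-agent detection. The pattern list is illustrative, not exhaustive:

```javascript
// Common crawler user-agent substrings; a real deployment would want a
// maintained list rather than this hand-picked one.
const BOT_PATTERN = /bot|crawler|spider|facebookexternalhit|twitterbot|slurp/i;

function isBot(userAgent) {
  return BOT_PATTERN.test(userAgent || '');
}
```

A server route could call `isBot(req.headers['user-agent'])` and serve the static image page when it returns true.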
When someone is watching an undebate - we should offer them a share button.
Currently the majority of the operation of a candidate-conversation is in the undebate module - everything from the splash screen through the thank you and feedback panel is handled in it, in code. If we wanted to create another structure - such as the town hall, we would have to create another module like undebate, but different.
We would be able to create new discussion structures faster if we componentized undebate so that an undebate were a series of modules, and what series to use is determined by the parameters set in the database.
Add the election name (from the subject: field) to the top of the page. Make it format well on all device types.
The file is app/components/web-components/undebate.jsx
there will be a prop called subject
add it to either the function main() or to what render returns.
Then adjust the placement of the windows (seats), agenda, and buttons in calculatePositionAndStyle().
Note that initial values are set when state is initialized, and then recalculated when the viewport size is known based on the viewport proportions.
See also #25; it would be useful to make both changes at the same time.
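A hypothetical sketch of deriving a style for the subject title from the viewport, in the spirit of the recalculation described above. The ratios here are illustrative guesses, not values from the design:

```javascript
// Assumed helper: compute a title style once the viewport size is known.
// The vw-based font sizes are placeholders to show the portrait/landscape split.
function subjectTitleStyle(width, height) {
  const portrait = height > width;
  return {
    position: 'absolute',
    top: 0,
    left: 0,
    width: '100vw',
    textAlign: 'center',
    fontSize: portrait ? '4vw' : '2vw', // larger relative size on narrow screens
  };
}
```

The component could spread this object into the style of the element that renders `this.props.subject`.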
We don't currently block recording from a smartphone, and in some cases it may work - but there are issues:
Government agencies require Americans with Disabilities Act compliance. We need research into what we need to do to comply, and how to test it after we have implemented it. Also, is there a way to engage with those who have disabilities in order to directly collect their needs and address them, and then engage them for actual user testing?
The output of this task should be a document that references HTML and/or React requirements for ADA compliance, tools for testing, resources for help/assistance.
Create a landing page for candidates who are referred to this site to record their conversations. The page should take their email from the iota and send the candidate an email. When the candidate clicks on the link in the email, the candidate can continue.
Note the candidate could click on the link on the computer they are using - in that case it should open up a window to the next step of what the candidate should do.
But the candidate could also click on the link in an email on another device - like their phone. In that case, the state of the page that initiated the email should advance to the next state.
The next state should be the candidate recorder.
Create a prototype undebate that uses Vimeo so we can discover what implementation issues we would have.
If you look at app/components/web-components/undebate.jsx you can see how a "youtube" prototype was worked in. You could probably follow that pattern to create a Vimeo prototype.
For slow links:
reduce the size of the .js file by using gzip; maybe post it on a CDN.
reduce the splash image resolutions and quality (especially the marble tabletop).
Alphabetize the candidates in the viewer, by last name. But we need to figure out which is the last name in the name field, or get the names separately.
app/components/data-components/merge-participants.js is where the items are pulled out of the mongo db.
Assume a new field, undebate.webComponent.alphabetize, that indicates that the records should be sorted by last name. The challenge is there is only one field - participant.name - so we have to figure out which name is the last name (and not Jr. or Sr. or III, etc). One way would be to create a list of possible endings, and if there are 3 words in the name, take the last one unless it matches one in the list.
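The suffix-list approach could be sketched like this. The suffix list and the handling of multi-word names are assumptions that would need review against real candidate data:

```javascript
// Generational and honorific suffixes to skip when picking the last name.
const SUFFIXES = ['jr', 'jr.', 'sr', 'sr.', 'ii', 'iii', 'iv', 'esq', 'esq.'];

function lastName(name) {
  const parts = name.trim().split(/\s+/);
  let i = parts.length - 1;
  // walk backwards past suffixes like "Jr." or "III"
  while (i > 0 && SUFFIXES.includes(parts[i].toLowerCase())) i--;
  return parts[i];
}

function alphabetizeParticipants(participants) {
  // sort a copy so merge-participants.js callers keep their original order
  return [...participants].sort((a, b) =>
    lastName(a.name).localeCompare(lastName(b.name))
  );
}
```

merge-participants.js could apply `alphabetizeParticipants` when the alphabetize field is set on the record.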
Some people are really not happy about this UI design. It's been enough to get the project to this point, but we really need a more generally attractive UI design.
Here is an example with all the candidates slots filled: https://undebate.herokuapp.com/schoolboard-conversation
Here is an example with only 3 candidate slots filled: https://undebate.herokuapp.com/san-francisco-district-attorney
One way that people get to this is by clicking on a link at ballotpedia: https://ballotpedia.org/Chesa_Boudin
Other ways are through links in email campaigns, and facebook/twitter links.
We are open to anything, and would love to do A/B testing on different designs to figure out what works. One word about the standard business practice of designing for the demographics of your target market - that's great for business but that's the opposite of what's needed for democratic discourse. We need to design the inclusive user interface, the unbiased user interface.
For a new UI design, we don't need it in any particular format. It can be pencil on paper, power point (my favorite), or any of the many UI prototyping tools. Whatever is fastest for you, that gets the idea across. We do need help with specifically what colors, shapes, fonts, and such to use.
Constraints:
We do need the UI design to convey how speakers change. In the current design you see the windows rotate to move the next person into the center window. It doesn't have to be like this, but we need a description of how speakers change, and we need the design to consider that changing the speaker is not seamless - there may be flashes and discontinuities when we change video streams (that's why we are moving things around). But they don't have to rotate. Also, we can put a still image on top momentarily while the video is loading underneath.
We have a preference for having all the candidates on screen in some way. This matches how you see the race on Ballotpedia's page, and if you were in the audience at a debate you would see all the candidates on stage - though only one might show up in the big display behind the candidates. But if you want to challenge that preference with A/B testing - go for it.
We think it needs to all be above the fold - like on a television. We don't want people to have to scroll down while someone is talking to see something else. So you need to think about how it's going to look on a desktop, but also on a smartphone in both landscape and portrait mode.
The aspect ratio (width to height) of the videos shouldn't be changed, but this app will run on different viewports and smartphones with very different aspect ratios.
The browser won't let us play back video with audio until after the user presses a button. That's why we have the begin button. But we can play back silent video before that, or we can load images.
The current design can support from 2 to 7 candidates - but they all use the same format, so there's blank space when there are only 2 candidates. It may be better to have different layouts for different numbers of candidates. And sometimes there are more than 7 candidates.
As for user feedback, I've heard people say it's too formal, they don't like the way things move on the screen. I've heard that we need to include more content on the page like the Election and Race.
I encourage talking to the people around you, everyone is a target user of this.
Here is the viewer feedback we have received:
This was a great article about securing Node applications. https://medium.com/@rajapradhan08/best-practices-for-securing-node-js-web-applications-2e54cfefdc05
Which of these should be applied to this project? If something is big and should be applied, create separate issues for it.
Currently this is one node application that fetches data from a mongoDB. As we expand to more and more elections, we are going to need to spread out onto multiple servers. How should we do that? This will need to be studied and probably broken down into multiple tasks.
Also - how do we do load testing.
Referring to this in Figma: https://www.figma.com/file/jtIoqpnhGfBEAHSVLAIVf4/undebate_UI?node-id=363%3A425
And to this image:
In Figma the template is laid out as 1920x1080. If you click on an element (like the title) it will show dimensions and details in pixels. We need to convert those dimensions into vw or vh or rem as much as possible -- (except in calculate style, where we are doing everything in px so that we can easily add vw's and rem's).
We looked at the font sizes and decided that 35px should be 1rem, 45px should be 1.25rem, and 30px should be 0.85rem. There is a place where the font size is calculated and set in the HTML element.
A- The size of the blue box should be calculated in vw and vh, and rendered as an inline-block (not a border, which is taller than the text).
B. The yellow box size should be calculated in vw and vh and rendered as an inline-block.
C. Font of the date should be normal weight, 1rem.
D. The video window should be taller and wider, and the text title should overlay it. Also make the black background 50% transparent. The style.left calculations for the seats will need to be updated. Check and use the font family, size, and weight as described in Figma.
E. "Agenda" font family/size/weight also check on the height of the box.
F: Font family/size/weight
G: Font family/size/weight
H: We should change the begin button to a bigger version of the Play button, and make the black background 50% transparent. To do that - we will have to take the file in the svgr directory and edit it. Make an icons directory in app/components and put it there.
I: Make all the button icons 50% transparent and put them in the new icons directory. (The idea of the svgr directory is going to have to go away, because we will have to make tweaks to the files every time.) Unless you can think of a better way.
J: If there are fewer candidates than will fill the screen, center the row so there is an equal amount of space on the left and right. Depending on the aspect ratio of the display, in some cases there may be white space to the right of Agenda. In this case the Speaker+Agenda row should be centered, similar to the candidate row when there are only a few candidates.
K: In this image,
If there are more candidates than will fit on the screen, add this box showing the icon, the number of additional candidates, and the right chevron. If someone clicks on the box, scroll the candidates to the left by half the screen width. After scrolling, show a similar box on the left, showing the number of candidates that are offscreen to the left. You could accomplish the scroll by creating a candidateScrollLeft state variable and adding it to the left seats when you calculateStyle.
L: In portrait mode, add a second row of candidates - if necessary - as in https://www.figma.com/file/jtIoqpnhGfBEAHSVLAIVf4/undebate_UI?node-id=172%3A132
Notice in landscape mode there is a calculation:

```javascript
let calcHeight = navBarHeight + vGap + width * seatWidthRatio * HDRatio + titleHeight + vGap + width * speakingWidthRatio * HDRatio + titleHeight + vGap;
if (calcHeight > height) { // if the window is really wide - squish the video height so it still fits
  let heightForVideo = height - navBarHeight - vGap - titleHeight - vGap - titleHeight - vGap;
  let calcHeightForVideo = width * seatWidthRatio * HDRatio + width * speakingWidthRatio * HDRatio;
  seatWidthRatio = (seatWidthRatio * heightForVideo) / calcHeightForVideo;
  speakingWidthRatio = (speakingWidthRatio * heightForVideo) / calcHeightForVideo;
}
```
Make a similar calculation for portrait mode to make sure it all fits on the screen. You'll also have to center the speaker and the agenda if you do this.
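An untested sketch of the analogous portrait-mode calculation, parameterized by the number of candidate rows. Variable names follow the landscape calculation above, but the row structure (speaker row plus one or two candidate rows) is an assumption:

```javascript
// Portrait-mode squish sketch: if the stacked rows are taller than the
// viewport, scale both width ratios down so everything fits.
function squishPortrait({ height, width, navBarHeight, vGap, titleHeight,
                          seatWidthRatio, speakingWidthRatio, HDRatio, rows }) {
  const calcHeight = navBarHeight + vGap
    + width * speakingWidthRatio * HDRatio + titleHeight + vGap
    + rows * (width * seatWidthRatio * HDRatio + titleHeight + vGap);
  if (calcHeight > height) { // too tall - squish the video heights so they fit
    const heightForVideo = height - navBarHeight - (rows + 1) * (titleHeight + vGap) - vGap;
    const calcHeightForVideo = width * speakingWidthRatio * HDRatio
      + rows * width * seatWidthRatio * HDRatio;
    const squish = heightForVideo / calcHeightForVideo;
    return { seatWidthRatio: seatWidthRatio * squish,
             speakingWidthRatio: speakingWidthRatio * squish };
  }
  return { seatWidthRatio, speakingWidthRatio };
}
```

As the issue notes, after squishing, the speaker and agenda would also need to be re-centered.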
For users with the privilege, give them a way to build undebates.
What's needed:
List of questions to ask. (typically 3).
The maximum time to allot for each question.
The name of the organization that's holding the undebate.
The title and description.
This must turn into a path that's unique: cc.enciv.org/org:/title:<title>
Lead the user through recording the moderator, and viewer video segments, or allow the user to send a link to someone else to record the segments.
Allow the user to send a link to a special version of the candidate-recorder, the recordings from which will be used as the example speaker.
Allow the user to review the segments.
Give the user a link to send to candidates.
Give the user a link to send to voters.
(How a user gets the login and privilege to do this is outside the scope. For now, we will assign the privilege manually in the database. To get an account, go to /schoolboard-candidate-recorder and make a recording and create an account.)
See https://cc.enciv.org/moderator-maker-recorder for a crude example of leading someone through recording the segments.
Implementing this feature will have 3 components, UI design, React Front End, Node/Mongo backend. If you want to work on this, we can figure out the steps and create separate issues as needed.
Attached is a zip file of .svg files that we can use for the control buttons on undebate.jsx instead of the words "prev speaker, prev section, etc".
[icons-1.zip](https://github.com/EnCiv/undebate/files/3975862/icons-1.zip)
Probably, the svg files should go in /assets/svgs or /assets/icons
Question: do we need to convert these to icons in some way, or do we just use them?
Is there a way to combine them into one file, to avoid having the browser load them one by one?
We would like a new UI for the candidate recorder. Currently it is the same code as the viewer, but it doesn't need to be.
To see the candidate recorder go to undebate.herokuapp.com/schoolboard-conversation-candidate-recorder - or [your url]/schoolboard-conversation-candidate-recorder
One of our biggest challenges is getting candidates to record their videos. So how can we make candidates comfortable with doing this as we introduce it? We also believe that it's the candidate's campaign manager that first looks at the link and evaluates the opportunity and whether and how the candidate does it. We also had feedback that we need to give candidates more notice - we were talking to them 2 weeks or less before the election. For the next election we will reach out 3 to 4 weeks in advance.
We had feedback about being able to edit the videos when you are recording them, we have implemented the Redo button, and the Prev Speaker button will now playback the video that was just recorded. But we could use usability testing of this.
We have viewer feedback that they would like the candidates to arrange their cameras at eye level so they aren't looking down into the camera. See this video: https://undebate.herokuapp.com/san-francisco-district-attorney. We also had feedback asking that there be good lighting. Our original concept was that this would be a casual conversation, and many of us developing this are frequent users of online video, so weird camera angles are something we don't notice anymore. Whether we should try to direct the candidates toward a more formal appearance is a question. Which approach, casual or formal, would give voters what they need to decide - and how do we test that?
Design implementation constraints:
Install something like Storybook, and use it to test app/components/web-components/undebate.jsx.
It's very dependent on the browser and the events that occur on playing video.
undebate.jsx is a very dense component and it absolutely needs to be broken down - but that's a different task.
Things to test for include:
Did all of the text fit on the screen?
Does video start playing after you hit the begin button?
Do the windows rotate after the first video clip has played?
Does the next speaker button advance to the next clip?
There are lots of tests to do, but the point is to set up the structure to do it, and hopefully to automate it, even if you still have to do it in a browser.
For slow links we could use a lower quality video stream.
We could measure the speed, and if the link is so slow that it can't play in real time, then we could prefetch an entire section before playing it, and begin prefetching the next section after that - but not start playing it until the whole section is available. --- But the device needs to have enough memory to support this.
After a voter watches an undebate in a primary election, we want to ask the voter if they have a question they would like the candidates to answer in the general election.
The challenge is that we can have thousands of voters posing questions, and we need to get it down to about 4 questions. The questions need to be different - instead of 4 questions about immigration (for example), we should have one about immigration, one about the economy, and so on. And a requirement is that this has to be totally automated in a way that can be deployed for elections across the country - without requiring staff to sort the questions; all the work must be done by the voters.
Also, we can't bias people by pre-defining categories or initial questions.
We want video of them asking the question, so we can play it in the undebate and then have the candidates answer it. We probably want the question to go something like "I'm David from Irvine and my question is What are you going to do to fix congress?".
After a voter records a question, we need them to review 5 or 6 questions (taken at random) that other voters have proposed, and choose which one or two would be the most valuable for the community. They could vote for their own, or not.
If there are a lot of questions, recorded by voters, we will want to ask the voter to review another round of 5 or 6, and choose.
We need to start with a UX design/investigation of this, and of course question the assumptions above.
You can try out https://cc.enciv.org/schoolboard-conversation-candidate-recorder and https://cc.enciv.org/schoolboard-conversation to see what recording is like right now.
The ideal starting point for this use case is right after a voter has finished watching an undebate.
We need to provide a site map to web crawlers that lists all the elections. We also need to update our response to robots.txt to be accurate, and include the site map.
references:
https://www.sitemaps.org/protocol.html
https://moz.com/learn/seo/robotstxt
The tool to generate the sitemap should run whenever the link is accessed; it should query the database for all the viewer records, and then translate the viewer records into URLs.
start with the code for getIota() in app/server/server.js
but create a new file in server/routes for sitemap for this.
Here is a query for the viewers for elections:

```javascript
// date format here depends on how bp_info.election_date is actually stored
{ "bp_info.stage_id": { $exists: true }, "bp_info.election_date": { $gte: ISODate("2020-01-01") } }
```
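A sketch of turning those viewer records into sitemap XML. The hostname comes from other issues in this list, and the `path` field comes from the iota record example above; everything else is an assumption:

```javascript
// Build a sitemap from viewer iota records; records without a path are skipped.
function sitemapXml(iotas, hostname = 'https://cc.enciv.org') {
  const urls = iotas
    .filter(iota => iota.path)
    .map(iota => `  <url><loc>${hostname}${iota.path}</loc></url>`)
    .join('\n');
  return '<?xml version="1.0" encoding="UTF-8"?>\n'
    + '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    + urls + '\n</urlset>';
}
```

The new route in server/routes would run the query, pass the results to something like `sitemapXml`, and respond with Content-Type application/xml.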
After a candidate records their part in a conversation, give them a link they can send out to their followers that has just their part in it - that's more of a one-on-one interview.
Ballotpedia has an API to access election information. The docs are at: https://ballotpedia.org/API-documentation.
For the 2020 year, we need to read through their information and generate the viewer and candidate recorder database entries, and links.
On startup we need to read in the information, then compare it to what we already have and make updates as necessary. Ideally there will be some event letting us know when data changes, so we can read through again, compare, and update - and even more ideally the event will include what's new/changed. But for now, we don't have the event, so we will have to read and compare on startup, and periodically read and compare again.
Use Google's (or some other) transcription service to get text from the videos, and add it to the UI somehow.
Create a dashboard view of the status of elections - so that we can easily see which elections have undebates, how many candidates are in each election, how many we have emails for, and how many have recorded.
We especially want to be able to zero in on elections where at least one candidate has recorded, but not all candidates have.
It would also be useful to see which races have the most views in the last time period (day, week, etc).
Also need to be able to ignore past elections.
app/components/web-components/undebate.jsx is over 2K lines and needs to be broken down and refactored into many smaller components.
Create an about page - for candidate conversations. - Needs more definition of content but we can create a place holder.
The EnCiv Logo is at the bottom left of the screen. In #30 we are adding the Election title to the top of the page, so we have room to move the EnCiv Logo to the top right of the page. Then, let's convert the logo into something that opens up and shows menu options when you hover over it with the mouse, or click/touch it. It can have an about option and a terms option. If you click on the logo again, or stop hovering over it, the menu should collapse.
As for the about page itself, we just need a place holder for now. We'll have to get text later.
The about page should be a react component, ideally with some kind of animated open or transition, not a sudden change.
After there are enough candidates participating in a conversation, send the URL of the conversation to BP via an API so that they can make it viewable to voters. "Enough" might be 1, or it might be 2, or it might be 50% or something else - we have to decide.
Making the number of visits apparent builds credibility, so they will know next time.
This is a high-level feature that needs to be fleshed out and prioritized.
The README.md here talks about how to deploy this on heroku. The task would be to figure out how to deploy this on google cloud and create an equivalent document.
Ultimately we are looking for autoscaling features, and the lowest cost for our poor unfunded nonprofit. But we need to start somewhere.
This project uses the debug package
Debug has been tagged by the audit package as having low severity vulnerabilities.
Debug should be updated, unless there is a reason not to update it.
The screen layout depends on the font size and the width and height. If we end up with a new smartphone that we haven't seen before, the app will make its best guess about what font size to use and where to put things. But sometimes that's wrong, and the Begin button ends up below where you can click on it, or the agenda window is in a bad place. We could take the calculated size, check if the Begin button is below the viewport, and if it is, adjust the font size and then send the information back to the server to update the file. Logger.error with the info is a good way to start sending the info back, or we could go really robust and convert the whole thing to a db collection or something.
We currently have info for the Samsung Line through the 9, and for the iPhone line through the X baked in.
In app/components/web-components/undebate.jsx
calcFontSize() is where the fontsize is looked up or calculated.
./resolution-to-font-size-table.js is where the known fontSizes are stored.
How can we do this better?
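One possible shape for the viewport check suggested above: after layout, see whether the Begin button's bottom fell below the viewport and scale the font size down proportionally so it fits. The proportional scaling is an illustrative choice, not the project's method:

```javascript
// If the measured bottom of the Begin button (e.g. from getBoundingClientRect)
// is below the viewport, shrink the font size so the layout fits, and report
// the miss so resolution-to-font-size-table.js can be improved.
function adjustedFontSize(fontSize, buttonBottom, viewportHeight) {
  if (buttonBottom > viewportHeight) {
    // logger.error('font size too large', { fontSize, buttonBottom, viewportHeight })
    return fontSize * (viewportHeight / buttonBottom);
  }
  return fontSize;
}
```

calcFontSize() could apply this as a fallback whenever the device isn't in the known-resolutions table.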
On AWS Amplify, try to get the web app running. Probably starting with the App.
And investigate moving the recorded videos to S3, but make sure that a secret key is not being sent to the browser.