This week we worked in the same group as in previous weeks and set up a circuit that controlled an external circuit through a reed relay.
The reed relay only completes the external circuit while current is flowing through its coil on the Arduino side. As a simple test we used the example blinking-LED program as the Arduino circuit, and then used the Arduino's +5V output to stand in for a mains connection, or any constant supply.
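A minimal sketch of the idea, assuming the relay coil is driven directly from pin 13 (a reed relay coil draws little enough current for this; a heavier relay would need a transistor between pin and coil):

```cpp
// Illustrative sketch (assumption: reed relay coil between pin 13 and GND).
int relayPin = 13;

void setup() {
  pinMode(relayPin, OUTPUT);
}

void loop() {
  digitalWrite(relayPin, HIGH); // energise the coil: the reed contacts close
                                // and complete the external circuit
  delay(1000);
  digitalWrite(relayPin, LOW);  // de-energise: the external circuit is broken
  delay(1000);
}
```

Because the reed contacts are electrically isolated from the coil, the same sketch works whether the external circuit is the Arduino's own +5V supply or a mains connection.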
Here are some pictures of the circuit...
After successfully getting these 'circuits' running, we replaced the external circuit with a mains connection and got the same results at the higher voltage.
Here is a video of the updated circuit...
After doing this exercise with reed relays, I think that using them to switch mains circuits and perform more complex actions would be a very interesting and useful path to brainstorm.
This week we looked at how analogue signals read by the Arduino could be sent over serial and used in other applications. I worked in the same group as before, because we now feel like a team and know how to share resources and complete tasks effectively.
To begin with, we were provided with a piece of code to upload to the Arduino that continuously reads from and writes to the serial port...
#include <SimpleMessageSystem.h> // Library providing the message* functions below

void setup() {
  // Initialise the serial port at 9600 baud. Note this is very slow:
  // you can go up to 115200 over USB, which is 12x faster.
  Serial.begin(9600); // Baud set at 9600 for compatibility, CHANGE!
}

void loop() {
  if (messageBuild() > 0) { // Checks if the message is complete and erases any previous messages
    switch (messageGetChar()) { // Gets the first word as a character
      case 'r': // Read pins (analog or digital)
        readpins(); // Call the readpins function
        break; // Break from the switch
      case 'w': // Write pin
        writepin(); // Call the writepin function
        break;
    }
  }
}

void readpins() { // Read pins (analog or digital)
  switch (messageGetChar()) { // Gets the next word as a character
    case 'd': // READ digital pins
      messageSendChar('d'); // Echo what is being read
      for (char i = 2; i < 14; i++)
        messageSendInt(digitalRead(i)); // Read digital pins 2 to 13
      messageEnd(); // Terminate the message being sent
      break;
    case 'a': // READ analog pins
      messageSendChar('a'); // Echo what is being read
      for (char i = 0; i < 6; i++)
        messageSendInt(analogRead(i)); // Read analog pins 0 to 5
      messageEnd(); // Terminate the message being sent
      break;
  }
}

void writepin() { // Write pin
  int pin;
  int state;
  switch (messageGetChar()) { // Gets the next word as a character
    case 'a': // WRITE an analog (PWM) pin
      pin = messageGetInt(); // Gets the next word as an integer
      state = messageGetInt();
      pinMode(pin, OUTPUT);
      analogWrite(pin, state);
      break;
    case 'd': // WRITE a digital pin
      pin = messageGetInt();
      state = messageGetInt();
      pinMode(pin, OUTPUT);
      digitalWrite(pin, state);
      break;
  }
}
We then uploaded the code to the Arduino. Initially the board was not responding to the commands being sent to it, but after selecting the correct COM port it began sending signals. Once the Arduino was broadcasting successfully, we connected a potentiometer to it in a simple circuit, as shown below...
We were then introduced to the application that would use the serial signal from the Arduino: Max/MSP. Max/MSP is a visual, object-oriented programming environment for MIDI, audio and video processing, which we used in real time to produce a square wave according to parameters set by the serial output from the Arduino. The setup we used from the example (adapted to output to the motherboard speaker) looked like this:
This Max/MSP patch, coupled with the variable value the potentiometer fed through the Arduino's serial output, produced a square wave whose frequency changed according to the reading on the pin the potentiometer was connected to.
Here is a video of the setup running... After conducting this exercise I think I should start brainstorming ideas for larger, more useful circuits. A good way to start would be to sketch very basic diagrams of these circuits.
This week the three of us formed a group again to share resources, and practised developing our Arduino code further to include arrays and additional loops.
We didn't start with the suggested task; instead we began by setting up an Arduino connected to six LEDs via separate digital pins. We then adapted a piece of code from the example library to run two loops (inside the main loop) that light the LEDs in sequence, then run the sequence in reverse, and repeat.
Here is a picture of the circuit... And a video of the same setup working...
After we had finished this, we experimented with the code from week 2 and added a potentiometer to the circuit, feeding a value into an analogue pin to change the speed of the flashing LEDs (closer to what the workshop actually asked of us).
Here is the final code:
int potPin = 2;                     // analogue pin the potentiometer wiper is on
int pins[] = { 2, 3, 4, 5, 6, 7 }; // digital pins driving the LEDs
int num_pins = 6;
int val = 0;

void setup() {
  int i;
  for (i = 0; i < num_pins; i++)
    pinMode(pins[i], OUTPUT); // set each LED pin as an output
  Serial.begin(9600);
}

void loop() {
  int i;
  for (i = 0; i < num_pins; i++) {       // light the LEDs in sequence
    val = analogRead(potPin);            // re-read the pot on every step
    digitalWrite(pins[i], HIGH);
    delay(val + 50);                     // pot value sets the flashing speed
    digitalWrite(pins[i], LOW);
    Serial.println(val);
  }
  for (i = num_pins - 1; i >= 0; i--) {  // then run the sequence in reverse
    val = analogRead(potPin);
    digitalWrite(pins[i], HIGH);
    delay(val + 50);
    digitalWrite(pins[i], LOW);
    Serial.println(val);
  }
}
And here is a video of the final circuit working...
After performing this exercise, I feel still more confident in programming and building circuits with Arduino hardware and software. It would be a good idea to start thinking up ideas for a larger-scale project that could be useful in the real world.
The device is essentially a brain-controlled peripheral that reads electrical signals from your brain through three carbon sensors and turns them into in-game actions, allowing users to control PC games without a keyboard and with minimal use of a mouse.
OCZ promises that average users will be able to begin using the device within hours after some initial practice. Of course, use of the device at its full potential will require some significant amounts of training, but OCZ claims the NIA can cut reaction times by as much as 60 percent over a conventional mouse controller. No concrete details on pricing have been confirmed yet, but sources claim the NIA should sell in the $300 range when it becomes available.
This could provide an enormously immersive form of interaction, or become a tedious and unrewarding experience for users... we won't know until it comes out, but it is certainly a step in the right direction and something to anticipate.
The piece in particular that I took notice of was "The Camera-Eye" that he made around 1919. (See sketch below)
"Picabia explored the idea of machines replacing humans or taking on human functions, in this case, the camera replacing the human eye." - Exhibit description (next to painting)
I liked the meaning behind this because, on some level, it is true. Machines have long been used to make human jobs easier (computers, etc.), right up to performing human functions (heart/lung support machines).
The description from the exhibition then goes on to say how Picabia's grandfather, an amateur photographer, predicted to Picabia that photography would replace painting. This has largely come to pass: painting is nowhere near as popular as photography nowadays.
I decided to look into the term "Maverick Machines" and how this name could have been conceived.
A quick search found that there was an exhibition called "Maverick Machines" in Edinburgh (http://maverickmachines.com/WordPress/). The website gives the description "machines that are a little unusual", which fits what maverick means (independent, unconventional), but it does not explain what relation this has to interfaces, so I decided to explore further what the exhibition was about.
On the website there is a video of the exhibition and the exhibits in action...
From this video it is clear that the machines in the exhibition were very unusual and, in fact, not very practical, since they offer little interaction or function. However, the exhibition does show how experimentation and innovation give us the interfaces we take for granted today.
So I decided to look into new interfaces that are starting to find real-world success. One of the biggest innovations (although not a new one) is touchscreen technology, which lets users use the screen itself as the method of input. Although it has been around for decades, it has only seen major implementation in recent years, and is now being built into a huge range of devices.
Touchscreens are seeing particular success in the mobile market, where they work very well alongside other emerging technologies such as motion and tilt sensing. These ways of interacting with a device are becoming something of a standard for mobile developers (phone/PDA makers, etc.).
Touchscreens are also being used to simplify common things we do in everyday life. Microsoft Surface is a real milestone in this kind of device-free interaction.
As you can see, this tabletop computer has been reduced to the basic components needed for interaction: essentially a screen. Another 'cool' feature of this development is that objects are recognised as soon as they are placed on the table and act independently as a sort of 'node', onto which files from other devices can be dropped by 'physically' dragging them.
Now, although this push towards simplicity is very cool and interesting, it is also important to give users more in-depth control over their computers. Developers cannot forget that some users will want that extra bit of function coupled with the good usability of a device like this.
It is clear from this research that some interfaces are too revolutionary, or arrive too soon, to see real success (even when they are actually good).